From owner-freebsd-fs@FreeBSD.ORG Sun Feb 23 17:30:30 2014
From: mikej <mikej@mikej.com>
To: freebsd-fs@freebsd.org
Subject: ffs_fsync: dirty
Date: Sun, 23 Feb 2014 13:24:07 -0400

FreeBSD custom 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r261885: Fri Feb 14
08:51:48 EST 2014     mikej@custom:/usr/obj/usr/src/sys/GENERIC  amd64

I get a bunch of these while running poudriere:

ffs_fsync: dirty
0xfffff808e200e3b0: tag ufs, type VDIR
    usecount 1, writecount 0, refcount 8 mountedhere 0
    flags (VI_ACTIVE)
    v_object 0xfffff8039e934300 ref 0 pages 38 cleanbuf 1 dirtybuf 4
    lock type ufs: EXCL by thread 0xfffff8021bf72920 (pid 48820, cpdup, tid 100292)
    ino 1527731, on dev mfid0p2

I also get these LORs, but it never drops to the debugger.
lock order reversal:
 1st 0xfffffe0f9447e4d8 bufwait (bufwait) @ /usr/src/sys/kern/vfs_bio.c:3081
 2nd 0xfffff8008b4a4000 dirhash (dirhash) @ /usr/src/sys/ufs/ufs/ufs_dirhash.c:284
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe104b7dc660
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe104b7dc710
witness_checkorder() at witness_checkorder+0xd23/frame 0xfffffe104b7dc7a0
_sx_xlock() at _sx_xlock+0x75/frame 0xfffffe104b7dc7e0
ufsdirhash_remove() at ufsdirhash_remove+0x37/frame 0xfffffe104b7dc810
ufs_dirremove() at ufs_dirremove+0x11b/frame 0xfffffe104b7dc860
ufs_remove() at ufs_remove+0x75/frame 0xfffffe104b7dc8c0
VOP_REMOVE_APV() at VOP_REMOVE_APV+0xf0/frame 0xfffffe104b7dc8f0
kern_unlinkat() at kern_unlinkat+0x20c/frame 0xfffffe104b7dcae0
amd64_syscall() at amd64_syscall+0x265/frame 0xfffffe104b7dcbf0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe104b7dcbf0
--- syscall (10, FreeBSD ELF64, sys_unlink), rip = 0x8009309ba, rsp = 0x7fffffffda98, rbp = 0x7fffffffdb60 ---

lock order reversal:
 1st 0xfffff801f0802068 ufs (ufs) @ /usr/src/sys/kern/vfs_mount.c:851
 2nd 0xfffff802033799a0 devfs (devfs) @ /usr/src/sys/kern/vfs_subr.c:2101
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe104b78c3d0
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe104b78c480
witness_checkorder() at witness_checkorder+0xd23/frame 0xfffffe104b78c510
__lockmgr_args() at __lockmgr_args+0x878/frame 0xfffffe104b78c640
vop_stdlock() at vop_stdlock+0x3c/frame 0xfffffe104b78c660
VOP_LOCK1_APV() at VOP_LOCK1_APV+0xf5/frame 0xfffffe104b78c690
_vn_lock() at _vn_lock+0xab/frame 0xfffffe104b78c700
vget() at vget+0x70/frame 0xfffffe104b78c750
devfs_allocv() at devfs_allocv+0xfd/frame 0xfffffe104b78c7a0
devfs_root() at devfs_root+0x43/frame 0xfffffe104b78c7d0
vfs_donmount() at vfs_donmount+0x115e/frame 0xfffffe104b78caa0
sys_nmount() at sys_nmount+0x72/frame 0xfffffe104b78cae0
amd64_syscall() at amd64_syscall+0x265/frame 0xfffffe104b78cbf0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe104b78cbf0
--- syscall (378, FreeBSD ELF64, sys_nmount), rip = 0x800a9ecba, rsp = 0x7fffffffcb18, rbp = 0x7fffffffd080 ---

lock order reversal:
 1st 0xfffff8008b5d8240 ufs (ufs) @ /usr/src/sys/kern/vfs_subr.c:2101
 2nd 0xfffffe0f945594c0 bufwait (bufwait) @ /usr/src/sys/ufs/ffs/ffs_vnops.c:262
 3rd 0xfffff8008b92c240 ufs (ufs) @ /usr/src/sys/kern/vfs_subr.c:2101
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe104b836030
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe104b8360e0
witness_checkorder() at witness_checkorder+0xd23/frame 0xfffffe104b836170
__lockmgr_args() at __lockmgr_args+0x878/frame 0xfffffe104b8362a0
ffs_lock() at ffs_lock+0x84/frame 0xfffffe104b8362f0
VOP_LOCK1_APV() at VOP_LOCK1_APV+0xf5/frame 0xfffffe104b836320
_vn_lock() at _vn_lock+0xab/frame 0xfffffe104b836390
vget() at vget+0x70/frame 0xfffffe104b8363e0
vfs_hash_get() at vfs_hash_get+0xf5/frame 0xfffffe104b836430
ffs_vgetf() at ffs_vgetf+0x41/frame 0xfffffe104b8364c0
softdep_sync_buf() at softdep_sync_buf+0x3c7/frame 0xfffffe104b8365a0
ffs_syncvnode() at ffs_syncvnode+0x258/frame 0xfffffe104b836620
softdep_fsync() at softdep_fsync+0x598/frame 0xfffffe104b8366d0
ffs_fsync() at ffs_fsync+0x60/frame 0xfffffe104b836700
VOP_FSYNC_APV() at VOP_FSYNC_APV+0xf0/frame 0xfffffe104b836730
bufsync() at bufsync+0x35/frame 0xfffffe104b836760
bufobj_invalbuf() at bufobj_invalbuf+0x9f/frame 0xfffffe104b8367d0
vfs_donmount() at vfs_donmount+0xa49/frame 0xfffffe104b836aa0
sys_nmount() at sys_nmount+0x72/frame 0xfffffe104b836ae0
amd64_syscall() at amd64_syscall+0x265/frame 0xfffffe104b836bf0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe104b836bf0
--- syscall (378, FreeBSD ELF64, sys_nmount), rip = 0x800888cba, rsp = 0x7fffffffd1a8, rbp = 0x7fffffffdaf0 ---

lock order reversal:
 1st 0xfffff800135de240 syncer (syncer) @ /usr/src/sys/kern/vfs_subr.c:1720
 2nd 0xfffff8030ecb9068 ufs (ufs) @ /usr/src/sys/kern/vfs_subr.c:2101
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe104ad456a0
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe104ad45750
witness_checkorder() at witness_checkorder+0xd23/frame 0xfffffe104ad457e0
__lockmgr_args() at __lockmgr_args+0x878/frame 0xfffffe104ad45910
ffs_lock() at ffs_lock+0x84/frame 0xfffffe104ad45960
VOP_LOCK1_APV() at VOP_LOCK1_APV+0xf5/frame 0xfffffe104ad45990
_vn_lock() at _vn_lock+0xab/frame 0xfffffe104ad45a00
vget() at vget+0x70/frame 0xfffffe104ad45a50
vfs_msync() at vfs_msync+0x99/frame 0xfffffe104ad45ab0
sync_fsync() at sync_fsync+0xf7/frame 0xfffffe104ad45ae0
VOP_FSYNC_APV() at VOP_FSYNC_APV+0xf0/frame 0xfffffe104ad45b10
sched_sync() at sched_sync+0x34c/frame 0xfffffe104ad45bb0
fork_exit() at fork_exit+0x84/frame 0xfffffe104ad45bf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe104ad45bf0
--- trap 0, rip = 0, rsp = 0xfffffe104ad45cb0, rbp = 0 ---
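
Regarding the traces never dropping to the debugger: as far as I know that
is just the default WITNESS behaviour, and the debug.witness.* sysctls
control it. A minimal sketch, assuming the stock knobs on 11-CURRENT
(untested on this box):

  # print a stack trace with each LOR report (already the default here)
  sysctl debug.witness.trace=1
  # enter the debugger on a WITNESS violation instead of just logging it
  sysctl debug.witness.kdb=1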
[Attachment: dmesg (base64-encoded verbose boot log, truncated in the
archive). Recoverable highlights: FreeBSD 11.0-CURRENT r261885 GENERIC
amd64 with WITNESS enabled; Dell PE_SC3 with 4 packages x 4 cores of
Xeon X7350 (16 CPUs), 65536 MB RAM, mfi(4) Dell PERC 6, and bce(4)
BCM5708 NICs.]
ZmZmZmZmCnBjaWIxNjogICBzcGVjaWFsIGRlY29kZSAgICBJU0EKcGNpYjE2OiBjb3VsZCBub3Qg Z2V0IFBDSSBpbnRlcnJ1cHQgcm91dGluZyB0YWJsZSBmb3IgXDEzNF9TQl8uUENJMC5QRVgzLlVQ U1QuRFdONSAtIEFFX05PVF9GT1VORApwY2k0OiA8QUNQSSBQQ0kgYnVzPiBvbiBwY2liMTYKcGNp YjE2OiBhbGxvY2F0ZWQgYnVzIHJhbmdlICg0LTQpIGZvciByaWQgMCBvZiBwY2k0CnBjaTQ6IGRv bWFpbj0wLCBwaHlzaWNhbCBidXM9NApmb3VuZC0+CXZlbmRvcj0weDExNjYsIGRldj0weDAxMDMs IHJldmlkPTB4YzMKCWRvbWFpbj0wLCBidXM9NCwgc2xvdD0wLCBmdW5jPTAKCWNsYXNzPTA2LTA0 LTAwLCBoZHJ0eXBlPTB4MDEsIG1mZGV2PTAKCWNtZHJlZz0weDAwMDcsIHN0YXRyZWc9MHgwMDEw LCBjYWNoZWxuc3o9MTYgKGR3b3JkcykKCWxhdHRpbWVyPTB4MDAgKDAgbnMpLCBtaW5nbnQ9MHgw NyAoMTc1MCBucyksIG1heGxhdD0weDAwICgwIG5zKQoJcG93ZXJzcGVjIDIgIHN1cHBvcnRzIEQw IEQzICBjdXJyZW50IEQwCglzZWNidXM9NSwgc3ViYnVzPTUKcGNpYjE2OiBhbGxvY2F0ZWQgYnVz IHJhbmdlICg1LTUpIGZvciByaWQgMCBvZiBwY2kwOjQ6MDowCnBjaWIxNzogPEFDUEkgUENJLVBD SSBicmlkZ2U+IGF0IGRldmljZSAwLjAgb24gcGNpNApwY2liMTY6IGFsbG9jYXRlZCBtZW1vcnkg cmFuZ2UgKDB4ZGMwMDAwMDAtMHhkZGZmZmZmZikgZm9yIHJpZCAyMCBvZiBwY2liMTcKcGNpYjE3 OiAgIGRvbWFpbiAgICAgICAgICAgIDAKcGNpYjE3OiAgIHNlY29uZGFyeSBidXMgICAgIDUKcGNp YjE3OiAgIHN1Ym9yZGluYXRlIGJ1cyAgIDUKcGNpYjE3OiAgIG1lbW9yeSBkZWNvZGUgICAgIDB4 ZGMwMDAwMDAtMHhkZGZmZmZmZgpwY2liMTc6ICAgc3BlY2lhbCBkZWNvZGUgICAgSVNBCnBjaTU6 IDxBQ1BJIFBDSSBidXM+IG9uIHBjaWIxNwpwY2liMTc6IGFsbG9jYXRlZCBidXMgcmFuZ2UgKDUt NSkgZm9yIHJpZCAwIG9mIHBjaTUKcGNpNTogZG9tYWluPTAsIHBoeXNpY2FsIGJ1cz01CmZvdW5k LT4JdmVuZG9yPTB4MTRlNCwgZGV2PTB4MTY0YywgcmV2aWQ9MHgxMgoJZG9tYWluPTAsIGJ1cz01 LCBzbG90PTAsIGZ1bmM9MAoJY2xhc3M9MDItMDAtMDAsIGhkcnR5cGU9MHgwMCwgbWZkZXY9MAoJ Y21kcmVnPTB4MDE1ZSwgc3RhdHJlZz0weDAyYjAsIGNhY2hlbG5zej0xNiAoZHdvcmRzKQoJbGF0 dGltZXI9MHgyMCAoOTYwIG5zKSwgbWluZ250PTB4NDAgKDE2MDAwIG5zKSwgbWF4bGF0PTB4MDAg KDAgbnMpCglpbnRwaW49YSwgaXJxPTExCglwb3dlcnNwZWMgMiAgc3VwcG9ydHMgRDAgRDMgIGN1 cnJlbnQgRDAKCU1TSSBzdXBwb3J0cyAxIG1lc3NhZ2UsIDY0IGJpdAoJbWFwWzEwXTogdHlwZSBN ZW1vcnksIHJhbmdlIDY0LCBiYXNlIDB4ZGMwMDAwMDAsIHNpemUgMjUsIGVuYWJsZWQKcGNpYjE3 OiBhbGxvY2F0ZWQgbWVtb3J5IHJhbmdlICgweGRjMDAwMDAwLTB4ZGRmZmZmZmYpIGZvciByaWQg MTAgb2YgcGNpMDo1OjA6MApwY2liMTc6IG1hdGNoZWQgZW50cnkgZm9yIDUuMC5JTlRBCnBjaWIx Nzogc2xvdCAwIElOVEEgaGFyZHdpcmVkIHRvIElSUSAzOApiY2UzOiA8QnJvYWRjb20gTmV0WHRy ZW1lIElJIEJDTTU3MDggMTAwMEJhc2UtVCAoQjIpPiBtZW0gMHhkYzAwMDAwMC0weGRkZmZmZmZm IGlycSAzOCBhdCBkZXZpY2UgMC4wIG9uIHBjaTUKYmNlMzogYXR0ZW1wdGluZyB0byBhbGxvY2F0 ZSAxIE1TSSB2ZWN0b3JzICgxIHN1cHBvcnRlZCkKbXNpOiByb3V0aW5nIE1TSSBJUlEgMjYwIHRv IGxvY2FsIEFQSUMgMCB2ZWN0b3IgNTYKYmNlMzogdXNpbmcgSVJRIDI2MCBmb3IgTVNJCm1paWJ1 czM6IDxNSUkgYnVzPiBvbiBiY2UzCmJyZ3BoeTM6IDxCQ001NzA4QyAxMDAwQkFTRS1UIG1lZGlh IGludGVyZmFjZT4gUEhZIDEgb24gbWlpYnVzMwpicmdwaHkzOiBPVUkgMHgwMDEwMTgsIG1vZGVs IDB4MDAzNiwgcmV2LiA2CmJyZ3BoeTM6ICAxMGJhc2VULCAxMGJhc2VULUZEWCwgMTAwYmFzZVRY LCAxMDBiYXNlVFgtRkRYLCAxMDAwYmFzZVQsIDEwMDBiYXNlVC1tYXN0ZXIsIDEwMDBiYXNlVC1G RFgsIDEwMDBiYXNlVC1GRFgtbWFzdGVyLCBhdXRvLCBhdXRvLWZsb3cKYmNlMzogYnBmIGF0dGFj aGVkCmJjZTM6IEV0aGVybmV0IGFkZHJlc3M6IGE0OmJhOmRiOjM0Ojc3OmQ5CmJjZTM6IEFTSUMg KDB4NTcwODEwMjApOyBSZXYgKEIyKTsgQnVzIChQQ0ktWCwgNjQtYml0LCAxMzNNSHopOyBCL0Mg KDcuNC4wKTsgQnVmcyAoUlg6MjtUWDoyO1BHOjgpOyBGbGFncyAoU1BMVHxNU0l8TUZXKTsgTUZX IChVTVAgMS4xLjkpCkNvYWwgKFJYOjYsNiwxOCwxODsgVFg6MjAsMjAsODAsODApCnBjaWIxODog PEFDUEkgUENJLVBDSSBicmlkZ2U+IGlycSAzMiBhdCBkZXZpY2UgNC4wIG9uIHBjaTAKcGNpYjE4 OiAgIGRvbWFpbiAgICAgICAgICAgIDAKcGNpYjE4OiAgIHNlY29uZGFyeSBidXMgICAgIDEyCnBj aWIxODogICBzdWJvcmRpbmF0ZSBidXMgICAxNQpwY2liMTg6ICAgc3BlY2lhbCBkZWNvZGUgICAg SVNBCnBjaWIxODogY291bGQgbm90IGdldCBQQ0kgaW50ZXJydXB0IHJvdXRpbmcgdGFibGUgZm9y IFwxMzRfU0JfLlBDSTAuUEVYNCAtIEFFX05PVF9GT1VORApwY2kxMjogPEFDUEkgUENJIGJ1cz4g 
b24gcGNpYjE4CnBjaWIxODogYWxsb2NhdGVkIGJ1cyByYW5nZSAoMTItMTIpIGZvciByaWQgMCBv ZiBwY2kxMgpwY2kxMjogZG9tYWluPTAsIHBoeXNpY2FsIGJ1cz0xMgpmb3VuZC0+CXZlbmRvcj0w eDExMWQsIGRldj0weDgwMWMsIHJldmlkPTB4MGUKCWRvbWFpbj0wLCBidXM9MTIsIHNsb3Q9MCwg ZnVuYz0wCgljbGFzcz0wNi0wNC0wMCwgaGRydHlwZT0weDAxLCBtZmRldj0wCgljbWRyZWc9MHgw MDA3LCBzdGF0cmVnPTB4MDAxMCwgY2FjaGVsbnN6PTE2IChkd29yZHMpCglsYXR0aW1lcj0weDAw ICgwIG5zKSwgbWluZ250PTB4MDcgKDE3NTAgbnMpLCBtYXhsYXQ9MHgwMCAoMCBucykKCXBvd2Vy c3BlYyAzICBzdXBwb3J0cyBEMCBEMyAgY3VycmVudCBEMAoJc2VjYnVzPTEzLCBzdWJidXM9MTUK cGNpYjE4OiBhbGxvY2F0ZWQgYnVzIHJhbmdlICgxMy0xNSkgZm9yIHJpZCAwIG9mIHBjaTA6MTI6 MDowCnBjaWIxOTogPEFDUEkgUENJLVBDSSBicmlkZ2U+IGF0IGRldmljZSAwLjAgb24gcGNpMTIK cGNpYjE5OiAgIGRvbWFpbiAgICAgICAgICAgIDAKcGNpYjE5OiAgIHNlY29uZGFyeSBidXMgICAg IDEzCnBjaWIxOTogICBzdWJvcmRpbmF0ZSBidXMgICAxNQpwY2liMTk6ICAgc3BlY2lhbCBkZWNv ZGUgICAgSVNBCnBjaWIxOTogY291bGQgbm90IGdldCBQQ0kgaW50ZXJydXB0IHJvdXRpbmcgdGFi bGUgZm9yIFwxMzRfU0JfLlBDSTAuUEVYNC5VUFNUIC0gQUVfTk9UX0ZPVU5ECnBjaTEzOiA8QUNQ SSBQQ0kgYnVzPiBvbiBwY2liMTkKcGNpYjE5OiBhbGxvY2F0ZWQgYnVzIHJhbmdlICgxMy0xMykg Zm9yIHJpZCAwIG9mIHBjaTEzCnBjaTEzOiBkb21haW49MCwgcGh5c2ljYWwgYnVzPTEzCmZvdW5k LT4JdmVuZG9yPTB4MTExZCwgZGV2PTB4ODAxYywgcmV2aWQ9MHgwZQoJZG9tYWluPTAsIGJ1cz0x Mywgc2xvdD0yLCBmdW5jPTAKCWNsYXNzPTA2LTA0LTAwLCBoZHJ0eXBlPTB4MDEsIG1mZGV2PTAK CWNtZHJlZz0weDAwMDcsIHN0YXRyZWc9MHgwMDEwLCBjYWNoZWxuc3o9MTYgKGR3b3JkcykKCWxh dHRpbWVyPTB4MDAgKDAgbnMpLCBtaW5nbnQ9MHgwNyAoMTc1MCBucyksIG1heGxhdD0weDAwICgw IG5zKQoJcG93ZXJzcGVjIDMgIHN1cHBvcnRzIEQwIEQzICBjdXJyZW50IEQwCglNU0kgc3VwcG9y dHMgMSBtZXNzYWdlLCA2NCBiaXQKCXNlY2J1cz0xNSwgc3ViYnVzPTE1CnBjaWIxOTogYWxsb2Nh dGVkIGJ1cyByYW5nZSAoMTUtMTUpIGZvciByaWQgMCBvZiBwY2kwOjEzOjI6MApmb3VuZC0+CXZl bmRvcj0weDExMWQsIGRldj0weDgwMWMsIHJldmlkPTB4MGUKCWRvbWFpbj0wLCBidXM9MTMsIHNs b3Q9NCwgZnVuYz0wCgljbGFzcz0wNi0wNC0wMCwgaGRydHlwZT0weDAxLCBtZmRldj0wCgljbWRy ZWc9MHgwMDA3LCBzdGF0cmVnPTB4MDAxMCwgY2FjaGVsbnN6PTE2IChkd29yZHMpCglsYXR0aW1l cj0weDAwICgwIG5zKSwgbWluZ250PTB4MDcgKDE3NTAgbnMpLCBtYXhsYXQ9MHgwMCAoMCBucykK CXBvd2Vyc3BlYyAzICBzdXBwb3J0cyBEMCBEMyAgY3VycmVudCBEMAoJTVNJIHN1cHBvcnRzIDEg bWVzc2FnZSwgNjQgYml0CglzZWNidXM9MTQsIHN1YmJ1cz0xNApwY2liMTk6IGFsbG9jYXRlZCBi dXMgcmFuZ2UgKDE0LTE0KSBmb3IgcmlkIDAgb2YgcGNpMDoxMzo0OjAKcGNpYjIwOiA8QUNQSSBQ Q0ktUENJIGJyaWRnZT4gYXQgZGV2aWNlIDIuMCBvbiBwY2kxMwpwY2liMjA6ICAgZG9tYWluICAg ICAgICAgICAgMApwY2liMjA6ICAgc2Vjb25kYXJ5IGJ1cyAgICAgMTUKcGNpYjIwOiAgIHN1Ym9y ZGluYXRlIGJ1cyAgIDE1CnBjaWIyMDogICBzcGVjaWFsIGRlY29kZSAgICBJU0EKcGNpMTU6IDxB Q1BJIFBDSSBidXM+IG9uIHBjaWIyMApwY2liMjA6IGFsbG9jYXRlZCBidXMgcmFuZ2UgKDE1LTE1 KSBmb3IgcmlkIDAgb2YgcGNpMTUKcGNpMTU6IGRvbWFpbj0wLCBwaHlzaWNhbCBidXM9MTUKcGNp YjIxOiA8QUNQSSBQQ0ktUENJIGJyaWRnZT4gYXQgZGV2aWNlIDQuMCBvbiBwY2kxMwpwY2liMjE6 ICAgZG9tYWluICAgICAgICAgICAgMApwY2liMjE6ICAgc2Vjb25kYXJ5IGJ1cyAgICAgMTQKcGNp YjIxOiAgIHN1Ym9yZGluYXRlIGJ1cyAgIDE0CnBjaWIyMTogICBzcGVjaWFsIGRlY29kZSAgICBJ U0EKcGNpMTQ6IDxBQ1BJIFBDSSBidXM+IG9uIHBjaWIyMQpwY2liMjE6IGFsbG9jYXRlZCBidXMg cmFuZ2UgKDE0LTE0KSBmb3IgcmlkIDAgb2YgcGNpMTQKcGNpMTQ6IGRvbWFpbj0wLCBwaHlzaWNh bCBidXM9MTQKcGNpYjIyOiA8QUNQSSBQQ0ktUENJIGJyaWRnZT4gaXJxIDMzIGF0IGRldmljZSA2 LjAgb24gcGNpMApwY2liMjI6ICAgZG9tYWluICAgICAgICAgICAgMApwY2liMjI6ICAgc2Vjb25k YXJ5IGJ1cyAgICAgMTYKcGNpYjIyOiAgIHN1Ym9yZGluYXRlIGJ1cyAgIDE5CnBjaWIyMjogICBz cGVjaWFsIGRlY29kZSAgICBJU0EKcGNpYjIyOiBjb3VsZCBub3QgZ2V0IFBDSSBpbnRlcnJ1cHQg cm91dGluZyB0YWJsZSBmb3IgXDEzNF9TQl8uUENJMC5QRVg2IC0gQUVfTk9UX0ZPVU5ECnBjaTE2 OiA8QUNQSSBQQ0kgYnVzPiBvbiBwY2liMjIKcGNpYjIyOiBhbGxvY2F0ZWQgYnVzIHJhbmdlICgx Ni0xNikgZm9yIHJpZCAwIG9mIHBjaTE2CnBjaTE2OiBkb21haW49MCwgcGh5c2ljYWwgYnVzPTE2 
CmZvdW5kLT4JdmVuZG9yPTB4MTExZCwgZGV2PTB4ODAxYywgcmV2aWQ9MHgwZQoJZG9tYWluPTAs IGJ1cz0xNiwgc2xvdD0wLCBmdW5jPTAKCWNsYXNzPTA2LTA0LTAwLCBoZHJ0eXBlPTB4MDEsIG1m ZGV2PTAKCWNtZHJlZz0weDAwMDcsIHN0YXRyZWc9MHgwMDEwLCBjYWNoZWxuc3o9MTYgKGR3b3Jk cykKCWxhdHRpbWVyPTB4MDAgKDAgbnMpLCBtaW5nbnQ9MHgwNyAoMTc1MCBucyksIG1heGxhdD0w eDAwICgwIG5zKQoJcG93ZXJzcGVjIDMgIHN1cHBvcnRzIEQwIEQzICBjdXJyZW50IEQwCglzZWNi dXM9MTcsIHN1YmJ1cz0xOQpwY2liMjI6IGFsbG9jYXRlZCBidXMgcmFuZ2UgKDE3LTE5KSBmb3Ig cmlkIDAgb2YgcGNpMDoxNjowOjAKcGNpYjIzOiA8QUNQSSBQQ0ktUENJIGJyaWRnZT4gYXQgZGV2 aWNlIDAuMCBvbiBwY2kxNgpwY2liMjM6ICAgZG9tYWluICAgICAgICAgICAgMApwY2liMjM6ICAg c2Vjb25kYXJ5IGJ1cyAgICAgMTcKcGNpYjIzOiAgIHN1Ym9yZGluYXRlIGJ1cyAgIDE5CnBjaWIy MzogICBzcGVjaWFsIGRlY29kZSAgICBJU0EKcGNpYjIzOiBjb3VsZCBub3QgZ2V0IFBDSSBpbnRl cnJ1cHQgcm91dGluZyB0YWJsZSBmb3IgXDEzNF9TQl8uUENJMC5QRVg2LlVQU1QgLSBBRV9OT1Rf Rk9VTkQKcGNpMTc6IDxBQ1BJIFBDSSBidXM+IG9uIHBjaWIyMwpwY2liMjM6IGFsbG9jYXRlZCBi dXMgcmFuZ2UgKDE3LTE3KSBmb3IgcmlkIDAgb2YgcGNpMTcKcGNpMTc6IGRvbWFpbj0wLCBwaHlz aWNhbCBidXM9MTcKZm91bmQtPgl2ZW5kb3I9MHgxMTFkLCBkZXY9MHg4MDFjLCByZXZpZD0weDBl Cglkb21haW49MCwgYnVzPTE3LCBzbG90PTIsIGZ1bmM9MAoJY2xhc3M9MDYtMDQtMDAsIGhkcnR5 cGU9MHgwMSwgbWZkZXY9MAoJY21kcmVnPTB4MDAwNywgc3RhdHJlZz0weDAwMTAsIGNhY2hlbG5z ej0xNiAoZHdvcmRzKQoJbGF0dGltZXI9MHgwMCAoMCBucyksIG1pbmdudD0weDA3ICgxNzUwIG5z KSwgbWF4bGF0PTB4MDAgKDAgbnMpCglwb3dlcnNwZWMgMyAgc3VwcG9ydHMgRDAgRDMgIGN1cnJl bnQgRDAKCU1TSSBzdXBwb3J0cyAxIG1lc3NhZ2UsIDY0IGJpdAoJc2VjYnVzPTE5LCBzdWJidXM9 MTkKcGNpYjIzOiBhbGxvY2F0ZWQgYnVzIHJhbmdlICgxOS0xOSkgZm9yIHJpZCAwIG9mIHBjaTA6 MTc6MjowCmZvdW5kLT4JdmVuZG9yPTB4MTExZCwgZGV2PTB4ODAxYywgcmV2aWQ9MHgwZQoJZG9t YWluPTAsIGJ1cz0xNywgc2xvdD00LCBmdW5jPTAKCWNsYXNzPTA2LTA0LTAwLCBoZHJ0eXBlPTB4 MDEsIG1mZGV2PTAKCWNtZHJlZz0weDAwMDcsIHN0YXRyZWc9MHgwMDEwLCBjYWNoZWxuc3o9MTYg KGR3b3JkcykKCWxhdHRpbWVyPTB4MDAgKDAgbnMpLCBtaW5nbnQ9MHgwNyAoMTc1MCBucyksIG1h eGxhdD0weDAwICgwIG5zKQoJcG93ZXJzcGVjIDMgIHN1cHBvcnRzIEQwIEQzICBjdXJyZW50IEQw CglNU0kgc3VwcG9ydHMgMSBtZXNzYWdlLCA2NCBiaXQKCXNlY2J1cz0xOCwgc3ViYnVzPTE4CnBj aWIyMzogYWxsb2NhdGVkIGJ1cyByYW5nZSAoMTgtMTgpIGZvciByaWQgMCBvZiBwY2kwOjE3OjQ6 MApwY2liMjQ6IDxBQ1BJIFBDSS1QQ0kgYnJpZGdlPiBhdCBkZXZpY2UgMi4wIG9uIHBjaTE3CnBj aWIyNDogICBkb21haW4gICAgICAgICAgICAwCnBjaWIyNDogICBzZWNvbmRhcnkgYnVzICAgICAx OQpwY2liMjQ6ICAgc3Vib3JkaW5hdGUgYnVzICAgMTkKcGNpYjI0OiAgIHNwZWNpYWwgZGVjb2Rl ICAgIElTQQpwY2kxOTogPEFDUEkgUENJIGJ1cz4gb24gcGNpYjI0CnBjaWIyNDogYWxsb2NhdGVk IGJ1cyByYW5nZSAoMTktMTkpIGZvciByaWQgMCBvZiBwY2kxOQpwY2kxOTogZG9tYWluPTAsIHBo eXNpY2FsIGJ1cz0xOQpwY2liMjU6IDxBQ1BJIFBDSS1QQ0kgYnJpZGdlPiBhdCBkZXZpY2UgNC4w IG9uIHBjaTE3CnBjaWIyNTogICBkb21haW4gICAgICAgICAgICAwCnBjaWIyNTogICBzZWNvbmRh cnkgYnVzICAgICAxOApwY2liMjU6ICAgc3Vib3JkaW5hdGUgYnVzICAgMTgKcGNpYjI1OiAgIHNw ZWNpYWwgZGVjb2RlICAgIElTQQpwY2kxODogPEFDUEkgUENJIGJ1cz4gb24gcGNpYjI1CnBjaWIy NTogYWxsb2NhdGVkIGJ1cyByYW5nZSAoMTgtMTgpIGZvciByaWQgMCBvZiBwY2kxOApwY2kxODog ZG9tYWluPTAsIHBoeXNpY2FsIGJ1cz0xOApwY2kwOiA8YmFzZSBwZXJpcGhlcmFsPiBhdCBkZXZp Y2UgOC4wIChubyBkcml2ZXIgYXR0YWNoZWQpCnBjaWIyNjogPEFDUEkgUENJLVBDSSBicmlkZ2U+ IGF0IGRldmljZSAyOC4wIG9uIHBjaTAKcGNpYjI2OiAgIGRvbWFpbiAgICAgICAgICAgIDAKcGNp YjI2OiAgIHNlY29uZGFyeSBidXMgICAgIDIxCnBjaWIyNjogICBzdWJvcmRpbmF0ZSBidXMgICAy MQpwY2liMjY6ICAgc3BlY2lhbCBkZWNvZGUgICAgSVNBCnBjaTIxOiA8QUNQSSBQQ0kgYnVzPiBv biBwY2liMjYKcGNpYjI2OiBhbGxvY2F0ZWQgYnVzIHJhbmdlICgyMS0yMSkgZm9yIHJpZCAwIG9m IHBjaTIxCnBjaTIxOiBkb21haW49MCwgcGh5c2ljYWwgYnVzPTIxCnVoY2kwOiA8SW50ZWwgNjMx WEVTQi82MzJYRVNCLzMxMDAgVVNCIGNvbnRyb2xsZXIgVVNCLTE+IHBvcnQgMHhjYzgwLTB4Y2M5 ZiBpcnEgMjAgYXQgZGV2aWNlIDI5LjAgb24gcGNpMAp1c2J1czAgb24gdWhjaTAKdWhjaTA6IHVz 
YnBmOiBBdHRhY2hlZAp1aGNpMTogPEludGVsIDYzMVhFU0IvNjMyWEVTQi8zMTAwIFVTQiBjb250 cm9sbGVyIFVTQi0yPiBwb3J0IDB4Y2NhMC0weGNjYmYgaXJxIDIwIGF0IGRldmljZSAyOS4xIG9u IHBjaTAKdXNidXMxIG9uIHVoY2kxCnVoY2kxOiB1c2JwZjogQXR0YWNoZWQKdWhjaTI6IDxJbnRl bCA2MzFYRVNCLzYzMlhFU0IvMzEwMCBVU0IgY29udHJvbGxlciBVU0ItMz4gcG9ydCAweGNjYzAt MHhjY2RmIGlycSAyMCBhdCBkZXZpY2UgMjkuMiBvbiBwY2kwCnVzYnVzMiBvbiB1aGNpMgp1aGNp MjogdXNicGY6IEF0dGFjaGVkCnVoY2kzOiA8SW50ZWwgNjMxWEVTQi82MzJYRVNCLzMxMDAgVVNC IGNvbnRyb2xsZXIgVVNCLTQ+IHBvcnQgMHhjY2UwLTB4Y2NmZiBpcnEgMjAgYXQgZGV2aWNlIDI5 LjMgb24gcGNpMAp1c2J1czMgb24gdWhjaTMKdWhjaTM6IHVzYnBmOiBBdHRhY2hlZAplaGNpMDog PEludGVsIDYzWFhFU0IgVVNCIDIuMCBjb250cm9sbGVyPiBtZW0gMHhkZTBmZmMwMC0weGRlMGZm ZmZmIGlycSAyMSBhdCBkZXZpY2UgMjkuNyBvbiBwY2kwCmlvYXBpYzA6IHJvdXRpbmcgaW50cGlu IDIxIChQQ0kgSVJRIDIxKSB0byBsYXBpYyAwIHZlY3RvciA1Nwp1c2J1czQ6IEVIQ0kgdmVyc2lv biAxLjAKdXNidXM0IG9uIGVoY2kwCmVoY2kwOiB1c2JwZjogQXR0YWNoZWQKcGNpYjI3OiA8QUNQ SSBQQ0ktUENJIGJyaWRnZT4gYXQgZGV2aWNlIDMwLjAgb24gcGNpMApwY2liMDogYWxsb2NhdGVk IHR5cGUgNCAoMHhkMDAwLTB4ZGZmZikgZm9yIHJpZCAxYyBvZiBwY2liMjcKcGNpYjA6IGFsbG9j YXRlZCB0eXBlIDMgKDB4ZGU0MDAwMDAtMHhkZTRmZmZmZikgZm9yIHJpZCAyMCBvZiBwY2liMjcK cGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YzgwMDAwMDAtMHhjZmZmZmZmZikgZm9yIHJpZCAy NCBvZiBwY2liMjcKcGNpYjI3OiAgIGRvbWFpbiAgICAgICAgICAgIDAKcGNpYjI3OiAgIHNlY29u ZGFyeSBidXMgICAgIDI3CnBjaWIyNzogICBzdWJvcmRpbmF0ZSBidXMgICAyNwpwY2liMjc6ICAg SS9PIGRlY29kZSAgICAgICAgMHhkMDAwLTB4ZGZmZgpwY2liMjc6ICAgbWVtb3J5IGRlY29kZSAg ICAgMHhkZTQwMDAwMC0weGRlNGZmZmZmCnBjaWIyNzogICBwcmVmZXRjaGVkIGRlY29kZSAweGM4 MDAwMDAwLTB4Y2ZmZmZmZmYKcGNpYjI3OiAgIHNwZWNpYWwgZGVjb2RlICAgIFZHQSwgc3VidHJh Y3RpdmUKcGNpMjc6IDxBQ1BJIFBDSSBidXM+IG9uIHBjaWIyNwpwY2liMjc6IGFsbG9jYXRlZCBi dXMgcmFuZ2UgKDI3LTI3KSBmb3IgcmlkIDAgb2YgcGNpMjcKcGNpMjc6IGRvbWFpbj0wLCBwaHlz aWNhbCBidXM9MjcKZm91bmQtPgl2ZW5kb3I9MHgxMDAyLCBkZXY9MHg1MTVlLCByZXZpZD0weDAy Cglkb21haW49MCwgYnVzPTI3LCBzbG90PTEyLCBmdW5jPTAKCWNsYXNzPTAzLTAwLTAwLCBoZHJ0 eXBlPTB4MDAsIG1mZGV2PTAKCWNtZHJlZz0weDAxODcsIHN0YXRyZWc9MHgwMjkwLCBjYWNoZWxu c3o9MTYgKGR3b3JkcykKCWxhdHRpbWVyPTB4MjAgKDk2MCBucyksIG1pbmdudD0weDA4ICgyMDAw IG5zKSwgbWF4bGF0PTB4MDAgKDAgbnMpCglpbnRwaW49YSwgaXJxPTEwCglwb3dlcnNwZWMgMiAg c3VwcG9ydHMgRDAgRDEgRDIgRDMgIGN1cnJlbnQgRDAKCW1hcFsxMF06IHR5cGUgUHJlZmV0Y2hh YmxlIE1lbW9yeSwgcmFuZ2UgMzIsIGJhc2UgMHhjODAwMDAwMCwgc2l6ZSAyNywgZW5hYmxlZApw Y2liMjc6IGFsbG9jYXRlZCBwcmVmZXRjaCByYW5nZSAoMHhjODAwMDAwMC0weGNmZmZmZmZmKSBm b3IgcmlkIDEwIG9mIHBjaTA6Mjc6MTI6MAoJbWFwWzE0XTogdHlwZSBJL08gUG9ydCwgcmFuZ2Ug MzIsIGJhc2UgMHhkYzAwLCBzaXplICA4LCBlbmFibGVkCnBjaWIyNzogYWxsb2NhdGVkIEkvTyBw b3J0IHJhbmdlICgweGRjMDAtMHhkY2ZmKSBmb3IgcmlkIDE0IG9mIHBjaTA6Mjc6MTI6MAoJbWFw WzE4XTogdHlwZSBNZW1vcnksIHJhbmdlIDMyLCBiYXNlIDB4ZGU0ZjAwMDAsIHNpemUgMTYsIGVu YWJsZWQKcGNpYjI3OiBhbGxvY2F0ZWQgbWVtb3J5IHJhbmdlICgweGRlNGYwMDAwLTB4ZGU0ZmZm ZmYpIGZvciByaWQgMTggb2YgcGNpMDoyNzoxMjowCnBjaWIyNzogbWF0Y2hlZCBlbnRyeSBmb3Ig MjcuMTIuSU5UQQpwY2liMjc6IHNsb3QgMTIgSU5UQSBoYXJkd2lyZWQgdG8gSVJRIDE3CnZnYXBj aTA6IDxWR0EtY29tcGF0aWJsZSBkaXNwbGF5PiBwb3J0IDB4ZGMwMC0weGRjZmYgbWVtIDB4Yzgw MDAwMDAtMHhjZmZmZmZmZiwweGRlNGYwMDAwLTB4ZGU0ZmZmZmYgaXJxIDE3IGF0IGRldmljZSAx Mi4wIG9uIHBjaTI3CnZnYXBjaTA6IEJvb3QgdmlkZW8gZGV2aWNlCmlzYWIwOiA8UENJLUlTQSBi cmlkZ2U+IGF0IGRldmljZSAzMS4wIG9uIHBjaTAKaXNhMDogPElTQSBidXM+IG9uIGlzYWIwCmF0 YXBjaTA6IDxJbnRlbCA2M1hYRVNCMiBTQVRBMzAwIGNvbnRyb2xsZXI+IHBvcnQgMHgxZjAtMHgx ZjcsMHgzZjYsMHgxNzAtMHgxNzcsMHgzNzYsMHhmYzAwLTB4ZmMwZiBhdCBkZXZpY2UgMzEuMiBv biBwY2kwCnBjaTA6IGNoaWxkIGF0YXBjaTAgcmVxdWVzdGVkIHR5cGUgNCBmb3IgcmlkIDB4MjQs IGJ1dCB0aGUgQkFSIHNheXMgaXQgaXMgYW4gbWVtaW8KYXRhMDogPEFUQSBjaGFubmVsPiBhdCBj 
aGFubmVsIDAgb24gYXRhcGNpMAppb2FwaWMwOiByb3V0aW5nIGludHBpbiAxNCAoSVNBIElSUSAx NCkgdG8gbGFwaWMgMCB2ZWN0b3IgNTgKYXRhMTogPEFUQSBjaGFubmVsPiBhdCBjaGFubmVsIDEg b24gYXRhcGNpMAppb2FwaWMwOiByb3V0aW5nIGludHBpbiAxNSAoSVNBIElSUSAxNSkgdG8gbGFw aWMgMCB2ZWN0b3IgNTkKdWFydDA6IDwxNjU1MCBvciBjb21wYXRpYmxlPiBwb3J0IDB4M2Y4LTB4 M2ZmIGlycSA0IGZsYWdzIDB4MTAgb24gYWNwaTAKaW9hcGljMDogcm91dGluZyBpbnRwaW4gNCAo SVNBIElSUSA0KSB0byBsYXBpYyAwIHZlY3RvciA2MAp1YXJ0MDogZmFzdCBpbnRlcnJ1cHQKdWFy dDE6IDwxNjU1MCBvciBjb21wYXRpYmxlPiBwb3J0IDB4MmY4LTB4MmZmIGlycSAzIG9uIGFjcGkw CmlvYXBpYzA6IHJvdXRpbmcgaW50cGluIDMgKElTQSBJUlEgMykgdG8gbGFwaWMgMCB2ZWN0b3Ig NjEKdWFydDE6IGZhc3QgaW50ZXJydXB0CkFDUEk6IEVuYWJsZWQgMSBHUEVzIGluIGJsb2NrIDAw IHRvIDFGCmFjcGkwOiB3YWtldXAgY29kZSB2YSAweGZmZmZmZTEwNGFkMDUwMDAgcGEgMHg5MDAw MApleF9pc2FfaWRlbnRpZnkoKQphaGNfaXNhX2lkZW50aWZ5IDA6IGlvcG9ydCAweGMwMCBhbGxv YyBmYWlsZWQKYWhjX2lzYV9pZGVudGlmeSAxOiBpb3BvcnQgMHgxYzAwIGFsbG9jIGZhaWxlZAph aGNfaXNhX2lkZW50aWZ5IDI6IGlvcG9ydCAweDJjMDAgYWxsb2MgZmFpbGVkCmFoY19pc2FfaWRl bnRpZnkgMzogaW9wb3J0IDB4M2MwMCBhbGxvYyBmYWlsZWQKYWhjX2lzYV9pZGVudGlmeSA0OiBp b3BvcnQgMHg0YzAwIGFsbG9jIGZhaWxlZAphaGNfaXNhX2lkZW50aWZ5IDU6IGlvcG9ydCAweDVj MDAgYWxsb2MgZmFpbGVkCmFoY19pc2FfaWRlbnRpZnkgNjogaW9wb3J0IDB4NmMwMCBhbGxvYyBm YWlsZWQKYWhjX2lzYV9pZGVudGlmeSA3OiBpb3BvcnQgMHg3YzAwIGFsbG9jIGZhaWxlZAphaGNf aXNhX2lkZW50aWZ5IDg6IGlvcG9ydCAweDhjMDAgYWxsb2MgZmFpbGVkCmFoY19pc2FfaWRlbnRp ZnkgOTogaW9wb3J0IDB4OWMwMCBhbGxvYyBmYWlsZWQKYWhjX2lzYV9pZGVudGlmeSAxMDogaW9w b3J0IDB4YWMwMCBhbGxvYyBmYWlsZWQKYWhjX2lzYV9pZGVudGlmeSAxMTogaW9wb3J0IDB4YmMw MCBhbGxvYyBmYWlsZWQKYWhjX2lzYV9pZGVudGlmeSAxMjogaW9wb3J0IDB4Y2MwMCBhbGxvYyBm YWlsZWQKYWhjX2lzYV9pZGVudGlmeSAxMzogaW9wb3J0IDB4ZGMwMCBhbGxvYyBmYWlsZWQKYWhj X2lzYV9pZGVudGlmeSAxNDogaW9wb3J0IDB4ZWMwMCBhbGxvYyBmYWlsZWQKcGNpYjA6IGFsbG9j YXRlZCB0eXBlIDMgKDB4YTAwMDAtMHhhMDdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFs bG9jYXRlZCB0eXBlIDMgKDB4YTA4MDAtMHhhMGZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6 IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTEwMDAtMHhhMTdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNp YjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTE4MDAtMHhhMWZmZikgZm9yIHJpZCAwIG9mIG9ybTAK cGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTIwMDAtMHhhMjdmZikgZm9yIHJpZCAwIG9mIG9y bTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTI4MDAtMHhhMmZmZikgZm9yIHJpZCAwIG9m IG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTMwMDAtMHhhMzdmZikgZm9yIHJpZCAw IG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTM4MDAtMHhhM2ZmZikgZm9yIHJp ZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTQwMDAtMHhhNDdmZikgZm9y IHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTQ4MDAtMHhhNGZmZikg Zm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTUwMDAtMHhhNTdm ZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTU4MDAtMHhh NWZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTYwMDAt MHhhNjdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YTY4 MDAtMHhhNmZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4 YTcwMDAtMHhhNzdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMg KDB4YTc4MDAtMHhhN2ZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBl IDMgKDB4YTgwMDAtMHhhODdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0 eXBlIDMgKDB4YTg4MDAtMHhhOGZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRl ZCB0eXBlIDMgKDB4YTkwMDAtMHhhOTdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9j YXRlZCB0eXBlIDMgKDB4YTk4MDAtMHhhOWZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFs bG9jYXRlZCB0eXBlIDMgKDB4YWEwMDAtMHhhYTdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6 IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWE4MDAtMHhhYWZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNp 
YjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWIwMDAtMHhhYjdmZikgZm9yIHJpZCAwIG9mIG9ybTAK cGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWI4MDAtMHhhYmZmZikgZm9yIHJpZCAwIG9mIG9y bTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWMwMDAtMHhhYzdmZikgZm9yIHJpZCAwIG9m IG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWM4MDAtMHhhY2ZmZikgZm9yIHJpZCAw IG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWQwMDAtMHhhZDdmZikgZm9yIHJp ZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWQ4MDAtMHhhZGZmZikgZm9y IHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWUwMDAtMHhhZTdmZikg Zm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWU4MDAtMHhhZWZm ZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWYwMDAtMHhh ZjdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YWY4MDAt MHhhZmZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjAw MDAtMHhiMDdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4 YjA4MDAtMHhiMGZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMg KDB4YjEwMDAtMHhiMTdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBl IDMgKDB4YjE4MDAtMHhiMWZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0 eXBlIDMgKDB4YjIwMDAtMHhiMjdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRl ZCB0eXBlIDMgKDB4YjI4MDAtMHhiMmZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9j YXRlZCB0eXBlIDMgKDB4YjMwMDAtMHhiMzdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFs bG9jYXRlZCB0eXBlIDMgKDB4YjM4MDAtMHhiM2ZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6 IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjQwMDAtMHhiNDdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNp YjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjQ4MDAtMHhiNGZmZikgZm9yIHJpZCAwIG9mIG9ybTAK cGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjUwMDAtMHhiNTdmZikgZm9yIHJpZCAwIG9mIG9y bTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjU4MDAtMHhiNWZmZikgZm9yIHJpZCAwIG9m IG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjYwMDAtMHhiNjdmZikgZm9yIHJpZCAw IG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjY4MDAtMHhiNmZmZikgZm9yIHJp ZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjcwMDAtMHhiNzdmZikgZm9y IHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4Yjc4MDAtMHhiN2ZmZikg Zm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjgwMDAtMHhiODdm ZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4Yjg4MDAtMHhi OGZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YjkwMDAt MHhiOTdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4Yjk4 MDAtMHhiOWZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4 YmEwMDAtMHhiYTdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMg KDB4YmE4MDAtMHhiYWZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBl IDMgKDB4YmIwMDAtMHhiYjdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0 eXBlIDMgKDB4YmI4MDAtMHhiYmZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9jYXRl ZCB0eXBlIDMgKDB4YmMwMDAtMHhiYzdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFsbG9j YXRlZCB0eXBlIDMgKDB4YmM4MDAtMHhiY2ZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6IGFs bG9jYXRlZCB0eXBlIDMgKDB4YmQwMDAtMHhiZDdmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNpYjA6 IGFsbG9jYXRlZCB0eXBlIDMgKDB4YmQ4MDAtMHhiZGZmZikgZm9yIHJpZCAwIG9mIG9ybTAKcGNp YjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YmUwMDAtMHhiZTdmZikgZm9yIHJpZCAwIG9mIG9ybTAK cGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YmU4MDAtMHhiZWZmZikgZm9yIHJpZCAwIG9mIG9y bTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YmYwMDAtMHhiZjdmZikgZm9yIHJpZCAwIG9m IG9ybTAKcGNpYjA6IGFsbG9jYXRlZCB0eXBlIDMgKDB4YmY4MDAtMHhiZmZmZikgZm9yIHJpZCAw IG9mIG9ybTAKaXNhX3Byb2JlX2NoaWxkcmVuOiBkaXNhYmxpbmcgUG5QIGRldmljZXMKYXRydGM6 IGF0cnRjMCBhbHJlYWR5IGV4aXN0czsgc2tpcHBpbmcgaXQKYXR0aW1lcjogYXR0aW1lcjAgYWxy 
ZWFkeSBleGlzdHM7IHNraXBwaW5nIGl0CnNjOiBzYzAgYWxyZWFkeSBleGlzdHM7IHNraXBwaW5n IGl0CnVhcnQ6IHVhcnQwIGFscmVhZHkgZXhpc3RzOyBza2lwcGluZyBpdAp1YXJ0OiB1YXJ0MSBh bHJlYWR5IGV4aXN0czsgc2tpcHBpbmcgaXQKaXNhX3Byb2JlX2NoaWxkcmVuOiBwcm9iaW5nIG5v bi1QblAgZGV2aWNlcwpvcm0wOiA8SVNBIE9wdGlvbiBST01zPiBhdCBpb21lbSAweGMwMDAwLTB4 YzhmZmYsMHhjZjgwMC0weGQwN2ZmLDB4ZWMwMDAtMHhlZmZmZiBvbiBpc2EwCnNjMDogPFN5c3Rl bSBjb25zb2xlPiBhdCBmbGFncyAweDEwMCBvbiBpc2EwCnNjMDogVkdBIDwxNiB2aXJ0dWFsIGNv bnNvbGVzLCBmbGFncz0weDMwMD4Kc2MwOiBmYjAsIGtiZDEsIHRlcm1pbmFsIGVtdWxhdG9yOiBz Y3Rla2VuICh0ZWtlbiB0ZXJtaW5hbCkKdmdhMDogPEdlbmVyaWMgSVNBIFZHQT4gYXQgcG9ydCAw eDNjMC0weDNkZiBpb21lbSAweGEwMDAwLTB4YmZmZmYgb24gaXNhMApwY2liMDogYWxsb2NhdGVk IHR5cGUgNCAoMHgzYzAtMHgzZGYpIGZvciByaWQgMCBvZiB2Z2EwCnBjaWIwOiBhbGxvY2F0ZWQg dHlwZSAzICgweGEwMDAwLTB4YmZmZmYpIGZvciByaWQgMCBvZiB2Z2EwCnBjaWIwOiBhbGxvY2F0 ZWQgdHlwZSA0ICgweDYwLTB4NjApIGZvciByaWQgMCBvZiBhdGtiZGMwCnBjaWIwOiBhbGxvY2F0 ZWQgdHlwZSA0ICgweDY0LTB4NjQpIGZvciByaWQgMSBvZiBhdGtiZGMwCmF0a2JkYzA6IDxLZXli b2FyZCBjb250cm9sbGVyIChpODA0Mik+IGF0IHBvcnQgMHg2MCwweDY0IG9uIGlzYTAKcGNpYjA6 IGFsbG9jYXRlZCB0eXBlIDQgKDB4NjAtMHg2MCkgZm9yIHJpZCAwIG9mIGF0a2JkYzAKcGNpYjA6 IGFsbG9jYXRlZCB0eXBlIDQgKDB4NjQtMHg2NCkgZm9yIHJpZCAxIG9mIGF0a2JkYzAKYXRrYmQw OiA8QVQgS2V5Ym9hcmQ+IGlycSAxIG9uIGF0a2JkYzAKa2JkMCBhdCBhdGtiZDAKYXRrYmQ6IHRo ZSBjdXJyZW50IGtiZCBjb250cm9sbGVyIGNvbW1hbmQgYnl0ZSAwMDQ1CmF0a2JkOiBrZXlib2Fy ZCBJRCAweGZmZmZmZmZmICgxKQphdGtiZDogZmFpbGVkIHRvIHJlc2V0IHRoZSBrZXlib2FyZC4K a2JkMDogYXRrYmQwLCBBVCA4NCAoMSksIGNvbmZpZzoweDAsIGZsYWdzOjB4M2QwMDAwCmlvYXBp YzA6IHJvdXRpbmcgaW50cGluIDEgKElTQSBJUlEgMSkgdG8gbGFwaWMgMCB2ZWN0b3IgNjIKYXRr YmQwOiBbR0lBTlQtTE9DS0VEXQpwc20wOiB1bmFibGUgdG8gYWxsb2NhdGUgSVJRCnBjaWIwOiBh bGxvY2F0ZWQgdHlwZSA0ICgweDNmMC0weDNmNSkgZm9yIHJpZCAwIG9mIGZkYzAKcGNpYjA6IGFs bG9jYXRlZCB0eXBlIDQgKDB4M2Y3LTB4M2Y3KSBmb3IgcmlkIDEgb2YgZmRjMApmZGMwIGZhaWxl ZCB0byBwcm9iZSBhdCBwb3J0IDB4M2YwLTB4M2Y1LDB4M2Y3IGlycSA2IGRycSAyIG9uIGlzYTAK cHBjMDogY2Fubm90IHJlc2VydmUgSS9PIHBvcnQgcmFuZ2UKcHBjMCBmYWlsZWQgdG8gcHJvYmUg YXQgaXJxIDcgb24gaXNhMAp3YndkMCBmYWlsZWQgdG8gcHJvYmUgb24gaXNhMAppc2FfcHJvYmVf Y2hpbGRyZW46IHByb2JpbmcgUG5QIGRldmljZXMKZXN0MDogZW5hYmxpbmcgU3BlZWRTdGVwCmVz dDA6IDxFbmhhbmNlZCBTcGVlZFN0ZXAgRnJlcXVlbmN5IENvbnRyb2w+IG9uIGNwdTAKZXN0OiBD UFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMgbm90IHJlY29nbml6ZWQuCmVz dDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJjMDYwMDBiMmMKZGV2aWNlX2F0 dGFjaDogZXN0MCBhdHRhY2ggcmV0dXJuZWQgNgpwNHRjYzA6IDxDUFUgRnJlcXVlbmN5IFRoZXJt YWwgQ29udHJvbD4gb24gY3B1MAplc3QxOiA8RW5oYW5jZWQgU3BlZWRTdGVwIEZyZXF1ZW5jeSBD b250cm9sPiBvbiBjcHUxCmVzdDogQ1BVIHN1cHBvcnRzIEVuaGFuY2VkIFNwZWVkc3RlcCwgYnV0 IGlzIG5vdCByZWNvZ25pemVkLgplc3Q6IGNwdV92ZW5kb3IgR2VudWluZUludGVsLCBtc3IgYjJj MGIyYzA2MDAwYjJjCmRldmljZV9hdHRhY2g6IGVzdDEgYXR0YWNoIHJldHVybmVkIDYKcDR0Y2Mx OiA8Q1BVIEZyZXF1ZW5jeSBUaGVybWFsIENvbnRyb2w+IG9uIGNwdTEKZXN0MjogPEVuaGFuY2Vk IFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4gb24gY3B1Mgplc3Q6IENQVSBzdXBwb3J0cyBF bmhhbmNlZCBTcGVlZHN0ZXAsIGJ1dCBpcyBub3QgcmVjb2duaXplZC4KZXN0OiBjcHVfdmVuZG9y IEdlbnVpbmVJbnRlbCwgbXNyIGIyYzBiMmMwNjAwMGIyYwpkZXZpY2VfYXR0YWNoOiBlc3QyIGF0 dGFjaCByZXR1cm5lZCA2CnA0dGNjMjogPENQVSBGcmVxdWVuY3kgVGhlcm1hbCBDb250cm9sPiBv biBjcHUyCmVzdDM6IDxFbmhhbmNlZCBTcGVlZFN0ZXAgRnJlcXVlbmN5IENvbnRyb2w+IG9uIGNw dTMKZXN0OiBDUFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMgbm90IHJlY29n bml6ZWQuCmVzdDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJjMDYwMDBiMmMK ZGV2aWNlX2F0dGFjaDogZXN0MyBhdHRhY2ggcmV0dXJuZWQgNgpwNHRjYzM6IDxDUFUgRnJlcXVl bmN5IFRoZXJtYWwgQ29udHJvbD4gb24gY3B1Mwplc3Q0OiA8RW5oYW5jZWQgU3BlZWRTdGVwIEZy 
ZXF1ZW5jeSBDb250cm9sPiBvbiBjcHU0CmVzdDogQ1BVIHN1cHBvcnRzIEVuaGFuY2VkIFNwZWVk c3RlcCwgYnV0IGlzIG5vdCByZWNvZ25pemVkLgplc3Q6IGNwdV92ZW5kb3IgR2VudWluZUludGVs LCBtc3IgYjJjMGIyYzA2MDAwYjJjCmRldmljZV9hdHRhY2g6IGVzdDQgYXR0YWNoIHJldHVybmVk IDYKcDR0Y2M0OiA8Q1BVIEZyZXF1ZW5jeSBUaGVybWFsIENvbnRyb2w+IG9uIGNwdTQKZXN0NTog PEVuaGFuY2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4gb24gY3B1NQplc3Q6IENQVSBz dXBwb3J0cyBFbmhhbmNlZCBTcGVlZHN0ZXAsIGJ1dCBpcyBub3QgcmVjb2duaXplZC4KZXN0OiBj cHVfdmVuZG9yIEdlbnVpbmVJbnRlbCwgbXNyIGIyYzBiMmMwNjAwMGIyYwpkZXZpY2VfYXR0YWNo OiBlc3Q1IGF0dGFjaCByZXR1cm5lZCA2CnA0dGNjNTogPENQVSBGcmVxdWVuY3kgVGhlcm1hbCBD b250cm9sPiBvbiBjcHU1CmVzdDY6IDxFbmhhbmNlZCBTcGVlZFN0ZXAgRnJlcXVlbmN5IENvbnRy b2w+IG9uIGNwdTYKZXN0OiBDUFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMg bm90IHJlY29nbml6ZWQuCmVzdDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJj MDYwMDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0NiBhdHRhY2ggcmV0dXJuZWQgNgpwNHRjYzY6IDxD UFUgRnJlcXVlbmN5IFRoZXJtYWwgQ29udHJvbD4gb24gY3B1Ngplc3Q3OiA8RW5oYW5jZWQgU3Bl ZWRTdGVwIEZyZXF1ZW5jeSBDb250cm9sPiBvbiBjcHU3CmVzdDogQ1BVIHN1cHBvcnRzIEVuaGFu Y2VkIFNwZWVkc3RlcCwgYnV0IGlzIG5vdCByZWNvZ25pemVkLgplc3Q6IGNwdV92ZW5kb3IgR2Vu dWluZUludGVsLCBtc3IgYjJjMGIyYzA2MDAwYjJjCmRldmljZV9hdHRhY2g6IGVzdDcgYXR0YWNo IHJldHVybmVkIDYKcDR0Y2M3OiA8Q1BVIEZyZXF1ZW5jeSBUaGVybWFsIENvbnRyb2w+IG9uIGNw dTcKZXN0ODogPEVuaGFuY2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4gb24gY3B1OApl c3Q6IENQVSBzdXBwb3J0cyBFbmhhbmNlZCBTcGVlZHN0ZXAsIGJ1dCBpcyBub3QgcmVjb2duaXpl ZC4KZXN0OiBjcHVfdmVuZG9yIEdlbnVpbmVJbnRlbCwgbXNyIGIyYzBiMmMwNjAwMGIyYwpkZXZp Y2VfYXR0YWNoOiBlc3Q4IGF0dGFjaCByZXR1cm5lZCA2CnA0dGNjODogPENQVSBGcmVxdWVuY3kg VGhlcm1hbCBDb250cm9sPiBvbiBjcHU4CmVzdDk6IDxFbmhhbmNlZCBTcGVlZFN0ZXAgRnJlcXVl bmN5IENvbnRyb2w+IG9uIGNwdTkKZXN0OiBDUFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVw LCBidXQgaXMgbm90IHJlY29nbml6ZWQuCmVzdDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1z ciBiMmMwYjJjMDYwMDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0OSBhdHRhY2ggcmV0dXJuZWQgNgpw NHRjYzk6IDxDUFUgRnJlcXVlbmN5IFRoZXJtYWwgQ29udHJvbD4gb24gY3B1OQplc3QxMDogPEVu aGFuY2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4gb24gY3B1MTAKZXN0OiBDUFUgc3Vw cG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMgbm90IHJlY29nbml6ZWQuCmVzdDogY3B1 X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJjMDYwMDBiMmMKZGV2aWNlX2F0dGFjaDog ZXN0MTAgYXR0YWNoIHJldHVybmVkIDYKcDR0Y2MxMDogPENQVSBGcmVxdWVuY3kgVGhlcm1hbCBD b250cm9sPiBvbiBjcHUxMAplc3QxMTogPEVuaGFuY2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29u dHJvbD4gb24gY3B1MTEKZXN0OiBDUFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQg aXMgbm90IHJlY29nbml6ZWQuCmVzdDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMw YjJjMDYwMDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0MTEgYXR0YWNoIHJldHVybmVkIDYKcDR0Y2Mx MTogPENQVSBGcmVxdWVuY3kgVGhlcm1hbCBDb250cm9sPiBvbiBjcHUxMQplc3QxMjogPEVuaGFu Y2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4gb24gY3B1MTIKZXN0OiBDUFUgc3VwcG9y dHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMgbm90IHJlY29nbml6ZWQuCmVzdDogY3B1X3Zl bmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJjMDYwMDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0 MTIgYXR0YWNoIHJldHVybmVkIDYKcDR0Y2MxMjogPENQVSBGcmVxdWVuY3kgVGhlcm1hbCBDb250 cm9sPiBvbiBjcHUxMgplc3QxMzogPEVuaGFuY2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJv bD4gb24gY3B1MTMKZXN0OiBDUFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMg bm90IHJlY29nbml6ZWQuCmVzdDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJj MDYwMDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0MTMgYXR0YWNoIHJldHVybmVkIDYKcDR0Y2MxMzog PENQVSBGcmVxdWVuY3kgVGhlcm1hbCBDb250cm9sPiBvbiBjcHUxMwplc3QxNDogPEVuaGFuY2Vk IFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4gb24gY3B1MTQKZXN0OiBDUFUgc3VwcG9ydHMg RW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMgbm90IHJlY29nbml6ZWQuCmVzdDogY3B1X3ZlbmRv 
ciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJjMDYwMDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0MTQg YXR0YWNoIHJldHVybmVkIDYKcDR0Y2MxNDogPENQVSBGcmVxdWVuY3kgVGhlcm1hbCBDb250cm9s PiBvbiBjcHUxNAplc3QxNTogPEVuaGFuY2VkIFNwZWVkU3RlcCBGcmVxdWVuY3kgQ29udHJvbD4g b24gY3B1MTUKZXN0OiBDUFUgc3VwcG9ydHMgRW5oYW5jZWQgU3BlZWRzdGVwLCBidXQgaXMgbm90 IHJlY29nbml6ZWQuCmVzdDogY3B1X3ZlbmRvciBHZW51aW5lSW50ZWwsIG1zciBiMmMwYjJjMDYw MDBiMmMKZGV2aWNlX2F0dGFjaDogZXN0MTUgYXR0YWNoIHJldHVybmVkIDYKcDR0Y2MxNTogPENQ VSBGcmVxdWVuY3kgVGhlcm1hbCBDb250cm9sPiBvbiBjcHUxNQpEZXZpY2UgY29uZmlndXJhdGlv biBmaW5pc2hlZC4KbWZpMDogMjY0MCAoNDQ2NDkxMDUzcy8weDAwMjAvaW5mbykgLSBTaHV0ZG93 biBjb21tYW5kIHJlY2VpdmVkIGZyb20gaG9zdAptZmkwOiAyNjQxIChib290ICsgM3MvMHgwMDIw L2luZm8pIC0gRmlybXdhcmUgaW5pdGlhbGl6YXRpb24gc3RhcnRlZCAoUENJIElEIDAwNjAvMTAw MC8xZjBiLzEwMjgpCm1maTA6IDI2NDIgKGJvb3QgKyAzcy8weDAwMjAvaW5mbykgLSBGaXJtd2Fy ZSB2ZXJzaW9uIDEuMjIuMDItMDYxMgptZmkwOiAyNjQzIChib290ICsgMjNzLzB4MDAwOC9pbmZv KSAtIEJhdHRlcnkgUHJlc2VudAptZmkwOiAyNjQ0IChib290ICsgMjNzLzB4MDAyMC9pbmZvKSAt IENvbnRyb2xsZXIgaGFyZHdhcmUgcmV2aXNpb24gSUQgKDB4MCkKbWZpMDogMjY0NSAoYm9vdCAr IDIzcy8weDAwMjAvaW5mbykgLSBQYWNrYWdlIHZlcnNpb24gNi4yLjAtMDAxMwptZmkwOiAyNjQ2 IChib290ICsgMjNzLzB4MDAyMC9pbmZvKSAtIEJvYXJkIFJldmlzaW9uIAptZmkwOiAyNjQ3IChi b290ICsgNDhzLzB4MDAwNC9pbmZvKSAtIEVuY2xvc3VyZSBQRCAyMChjIE5vbmUvcDApIGNvbW11 bmljYXRpb24gcmVzdG9yZWQKbWZpMDogMjY0OCAoYm9vdCArIDQ4cy8weDAwMDIvaW5mbykgLSBJ bnNlcnRlZDogRW5jbCBQRCAyMAptZmkwOiBNRklfRENNRF9QRF9MSVNUX1FVRVJZIGZhaWxlZCAy Cm1maTA6IDI2NDkgKGJvb3QgKyA0OHMvMHgwMDAyL2luZm8pIC0gSW5zZXJ0ZWQ6IFBEIDIwKGMg Tm9uZS9wMCkgSW5mbzogZW5jbFBkPTIwLCBzY3NpVHlwZT1kLCBwb3J0TWFwPTA5LCBzYXNBZGRy PTU3ODJiMGIwMzA1NTY3MDAsMDAwMDAwMDAwMDAwMDAwMAptZmkwOiAyNjUwIChib290ICsgNDhz LzB4MDAwMi9pbmZvKSAtIEluc2VydGVkOiBQRCAwMChlMHgyMC9zMCkKbWZpMDogTUZJX0RDTURf UERfTElTVF9RVUVSWSBmYWlsZWQgMgptZmkwOiAyNjUxIChib290ICsgNDhzLzB4MDAwMi9pbmZv KSAtIEluc2VydGVkOiBQRCAwMChlMHgyMC9zMCkgSW5mbzogZW5jbFBkPTIwLCBzY3NpVHlwZT0w LCBwb3J0TWFwPTAwLCBzYXNBZGRyPTUwMDAwMGUwMTdkMTcxNTIsMDAwMDAwMDAwMDAwMDAwMApt ZmkwOiAyNjUyIChib290ICsgNDhzLzB4MDAwMi9pbmZvKSAtIEluc2VydGVkOiBQRCAwMShlMHgy MC9zMSkKbWZpMDogTUZJX0RDTURfUERfTElTVF9RVUVSWSBmYWlsZWQgMgptZmkwOiAyNjUzIChi b290ICsgNDhzLzB4MDAwMi9pbmZvKSAtIEluc2VydGVkOiBQRCAwMShlMHgyMC9zMSkgSW5mbzog ZW5jbFBkPTIwLCBzY3NpVHlwZT0wLCBwb3J0TWFwPTAxLCBzYXNBZGRyPTUwMDAwMGUwMTdkMTY4 ZTIsMDAwMDAwMDAwMDAwMDAwMAptZmkwOiAyNjU0ICg0NDY0OTExMTVzLzB4MDAyMC9pbmZvKSAt IFRpbWUgZXN0YWJsaXNoZWQgYXMgMDIvMjMvMTQgMTc6MTg6MzU7ICg1MCBzZWNvbmRzIHNpbmNl IHBvd2VyIG9uKQpwcm9jZnMgcmVnaXN0ZXJlZApsYXBpYzogRGl2aXNvciAyLCBGcmVxdWVuY3kg MTMyOTk2NjgzIEh6ClRpbWVjb3VudGVycyB0aWNrIGV2ZXJ5IDEuMDAwIG1zZWMKdmxhbjogaW5p dGlhbGl6ZWQsIHVzaW5nIGhhc2ggdGFibGVzIHdpdGggY2hhaW5pbmcKdGNwX2luaXQ6IG5ldC5p bmV0LnRjcC50Y2JoYXNoc2l6ZSBhdXRvIHR1bmVkIHRvIDUyNDI4OApsbzA6IGJwZiBhdHRhY2hl ZApocHRucjogbm8gY29udHJvbGxlciBkZXRlY3RlZC4KaHB0Mjd4eDogbm8gY29udHJvbGxlciBk ZXRlY3RlZC4KaHB0cnI6IG5vIGNvbnRyb2xsZXIgZGV0ZWN0ZWQuCnJhbmRvbTogdW5ibG9ja2lu ZyBkZXZpY2UuCnVzYnVzMDogMTJNYnBzIEZ1bGwgU3BlZWQgVVNCIHYxLjAKdXNidXMxOiAxMk1i cHMgRnVsbCBTcGVlZCBVU0IgdjEuMAp1c2J1czI6IDEyTWJwcyBGdWxsIFNwZWVkIFVTQiB2MS4w CnVzYnVzMzogMTJNYnBzIEZ1bGwgU3BlZWQgVVNCIHYxLjAKdXNidXM0OiA0ODBNYnBzIEhpZ2gg U3BlZWQgVVNCIHYyLjAKdWdlbjEuMTogPEludGVsPiBhdCB1c2J1czEKdWh1YjA6IDxJbnRlbCBV SENJIHJvb3QgSFVCLCBjbGFzcyA5LzAsIHJldiAxLjAwLzEuMDAsIGFkZHIgMT4gb24gdXNidXMx CnVnZW4wLjE6IDxJbnRlbD4gYXQgdXNidXMwCnVodWIxOiA8SW50ZWwgVUhDSSByb290IEhVQiwg Y2xhc3MgOS8wLCByZXYgMS4wMC8xLjAwLCBhZGRyIDE+IG9uIHVzYnVzMAp1Z2VuNC4xOiA8SW50 ZWw+IGF0IHVzYnVzNAp1aHViMjogPEludGVsIEVIQ0kgcm9vdCBIVUIsIGNsYXNzIDkvMCwgcmV2 
IDIuMDAvMS4wMCwgYWRkciAxPiBvbiB1c2J1czQKdWdlbjMuMTogPEludGVsPiBhdCB1c2J1czMK dWh1YjM6IDxJbnRlbCBVSENJIHJvb3QgSFVCLCBjbGFzcyA5LzAsIHJldiAxLjAwLzEuMDAsIGFk ZHIgMT4gb24gdXNidXMzCnVnZW4yLjE6IDxJbnRlbD4gYXQgdXNidXMyCnVodWI0OiA8SW50ZWwg VUhDSSByb290IEhVQiwgY2xhc3MgOS8wLCByZXYgMS4wMC8xLjAwLCBhZGRyIDE+IG9uIHVzYnVz MgptZmlkMCBvbiBtZmkwCm1maWQwOiAxMzkzOTJNQiAoMjg1NDc0ODE2IHNlY3RvcnMpIFJBSUQg dm9sdW1lIChubyBsYWJlbCkgaXMgb3B0aW1hbAptZmkwOiAyNjU1ICg0NDY0OTExNTNzLzB4MDAw OC9pbmZvKSAtIEJhdHRlcnkgdGVtcGVyYXR1cmUgaXMgbm9ybWFsCnVodWIwOiAyIHBvcnRzIHdp dGggMiByZW1vdmFibGUsIHNlbGYgcG93ZXJlZAp1aHViMTogMiBwb3J0cyB3aXRoIDIgcmVtb3Zh YmxlLCBzZWxmIHBvd2VyZWQKdWh1YjM6IDIgcG9ydHMgd2l0aCAyIHJlbW92YWJsZSwgc2VsZiBw b3dlcmVkCnVodWI0OiAyIHBvcnRzIHdpdGggMiByZW1vdmFibGUsIHNlbGYgcG93ZXJlZAptZmkw OiAyNjU2ICg0NDY0OTExNTNzLzB4MDAwOC9pbmZvKSAtIEN1cnJlbnQgY2FwYWNpdHkgb2YgdGhl IGJhdHRlcnkgaXMgYWJvdmUgdGhyZXNob2xkCmF0YTA6IFNBVEEgcmVzZXQ6IHBvcnRzIHN0YXR1 cz0weDAxCmF0YTA6IHJlc2V0IHRwMSBtYXNrPTAzIG9zdGF0MD0wMCBvc3RhdDE9MDAKYXRhMDog c3RhdDA9MHgwMCBlcnI9MHgwMSBsc2I9MHgxNCBtc2I9MHhlYgphdGEwOiBzdGF0MT0weDAwIGVy cj0weDAxIGxzYj0weDE0IG1zYj0weGViCmF0YTA6IHJlc2V0IHRwMiBzdGF0MD0wMCBzdGF0MT0w MCBkZXZpY2VzPTB4MzAwMDAKYXRhMTogU0FUQSByZXNldDogcG9ydHMgc3RhdHVzPTB4MDAKR0VP TTogbmV3IGRpc2sgbWZpZDAKcGFzczAgYXQgYXRhMCBidXMgMCBzY2J1czAgdGFyZ2V0IDAgbHVu IDAKcGFzczA6IDxURUFDIERWRC1ST00gRFYtMjhTVyBSLjJBPiBSZW1vdmFibGUgQ0QtUk9NIFND U0ktMCBkZXZpY2UgCnBhc3MwOiBTZXJpYWwgTnVtYmVyIDEwMDMwNTIyMDQ0OTQ5CnBhc3MwOiAx NTAuMDAwTUIvcyB0cmFuc2ZlcnMgKFNBVEEsIFVETUE1LCBBVEFQSSAxMmJ5dGVzLCBQSU8gODE5 MmJ5dGVzKQpjZDAgYXQgYXRhMCBidXMgMCBzY2J1czAgdGFyZ2V0IDAgbHVuIDAKY2QwOiA8VEVB QyBEVkQtUk9NIERWLTI4U1cgUi4yQT4gUmVtb3ZhYmxlIENELVJPTSBTQ1NJLTAgZGV2aWNlIApj ZDA6IFNlcmlhbCBOdW1iZXIgMTAwMzA1MjIwNDQ5NDkKY2QwOiAxNTAuMDAwTUIvcyB0cmFuc2Zl cnMgKFNBVEEsIFVETUE1LCBBVEFQSSAxMmJ5dGVzLCBQSU8gODE5MmJ5dGVzKQpjZDA6IEF0dGVt cHQgdG8gcXVlcnkgZGV2aWNlIHNpemUgZmFpbGVkOiBOT1QgUkVBRFksIE1lZGl1bSBub3QgcHJl c2VudCAtIHRyYXkgb3BlbgpHRU9NOiBuZXcgZGlzayBjZDAKTmV0dnNjIGluaXRpYWxpemluZy4u LiBkb25lIQpTTVA6IEFQIENQVSAjMTAgTGF1bmNoZWQhCmNwdTEwIEFQOgogICAgIElEOiAweDEy MDAwMDAwICAgVkVSOiAweDAwMDUwMDE0IExEUjogMHgwMDAwMDAwMCBERlI6IDB4ZmZmZmZmZmYK ICBsaW50MDogMHgwMDAxMDcwMCBsaW50MTogMHgwMDAwMDQwMCBUUFI6IDB4MDAwMDAwMDAgU1ZS OiAweDAwMDAwMWZmCiAgdGltZXI6IDB4MDAwMTAwZWYgdGhlcm06IDB4MDAwMTAwMDAgZXJyOiAw eDAwMDAwMGYwIHBtYzogMHgwMDAxMDQwMApTTVA6IEFQIENQVSAjOCBMYXVuY2hlZCEKY3B1OCBB UDoKICAgICBJRDogMHgxMDAwMDAwMCAgIFZFUjogMHgwMDA1MDAxNCBMRFI6IDB4MDAwMDAwMDAg REZSOiAweGZmZmZmZmZmCiAgbGludDA6IDB4MDAwMTA3MDAgbGludDE6IDB4MDAwMDA0MDAgVFBS OiAweDAwMDAwMDAwIFNWUjogMHgwMDAwMDFmZgogIHRpbWVyOiAweDAwMDEwMGVmIHRoZXJtOiAw eDAwMDEwMDAwIGVycjogMHgwMDAwMDBmMCBwbWM6IDB4MDAwMTA0MDAKU01QOiBBUCBDUFUgIzEx IExhdW5jaGVkIQpjcHUxMSBBUDoKICAgICBJRDogMHgxMzAwMDAwMCAgIFZFUjogMHgwMDA1MDAx NCBMRFI6IDB4MDAwMDAwMDAgREZSOiAweGZmZmZmZmZmCiAgbGludDA6IDB4MDAwMTA3MDAgbGlu dDE6IDB4MDAwMDA0MDAgVFBSOiAweDAwMDAwMDAwIFNWUjogMHgwMDAwMDFmZgogIHRpbWVyOiAw eDAwMDEwMGVmIHRoZXJtOiAweDAwMDEwMDAwIGVycjogMHgwMDAwMDBmMCBwbWM6IDB4MDAwMTA0 MDAKU01QOiBBUCBDUFUgIzkgTGF1bmNoZWQhCmNwdTkgQVA6CiAgICAgSUQ6IDB4MTEwMDAwMDAg ICBWRVI6IDB4MDAwNTAwMTQgTERSOiAweDAwMDAwMDAwIERGUjogMHhmZmZmZmZmZgogIGxpbnQw OiAweDAwMDEwNzAwIGxpbnQxOiAweDAwMDAwNDAwIFRQUjogMHgwMDAwMDAwMCBTVlI6IDB4MDAw MDAxZmYKICB0aW1lcjogMHgwMDAxMDBlZiB0aGVybTogMHgwMDAxMDAwMCBlcnI6IDB4MDAwMDAw ZjAgcG1jOiAweDAwMDEwNDAwClNNUDogQVAgQ1BVICMxNCBMYXVuY2hlZCEKY3B1MTQgQVA6CiAg ICAgSUQ6IDB4MWEwMDAwMDAgICBWRVI6IDB4MDAwNTAwMTQgTERSOiAweDAwMDAwMDAwIERGUjog MHhmZmZmZmZmZgogIGxpbnQwOiAweDAwMDEwNzAwIGxpbnQxOiAweDAwMDAwNDAwIFRQUjogMHgw 
MDAwMDAwMCBTVlI6IDB4MDAwMDAxZmYKICB0aW1lcjogMHgwMDAxMDBlZiB0aGVybTogMHgwMDAx MDAwMCBlcnI6IDB4MDAwMDAwZjAgcG1jOiAweDAwMDEwNDAwClNNUDogQVAgQ1BVICMxMiBMYXVu Y2hlZCEKY3B1MTIgQVA6CiAgICAgSUQ6IDB4MTgwMDAwMDAgICBWRVI6IDB4MDAwNTAwMTQgTERS OiAweDAwMDAwMDAwIERGUjogMHhmZmZmZmZmZgogIGxpbnQwOiAweDAwMDEwNzAwIGxpbnQxOiAw eDAwMDAwNDAwIFRQUjogMHgwMDAwMDAwMCBTVlI6IDB4MDAwMDAxZmYKICB0aW1lcjogMHgwMDAx MDBlZiB0aGVybTogMHgwMDAxMDAwMCBlcnI6IDB4MDAwMDAwZjAgcG1jOiAweDAwMDEwNDAwClNN UDogQVAgQ1BVICMxNSBMYXVuY2hlZCEKY3B1MTUgQVA6CiAgICAgSUQ6IDB4MWIwMDAwMDAgICBW RVI6IDB4MDAwNTAwMTQgTERSOiAweDAwMDAwMDAwIERGUjogMHhmZmZmZmZmZgogIGxpbnQwOiAw eDAwMDEwNzAwIGxpbnQxOiAweDAwMDAwNDAwIFRQUjogMHgwMDAwMDAwMCBTVlI6IDB4MDAwMDAx ZmYKICB0aW1lcjogMHgwMDAxMDBlZiB0aGVybTogMHgwMDAxMDAwMCBlcnI6IDB4MDAwMDAwZjAg cG1jOiAweDAwMDEwNDAwClNNUDogQVAgQ1BVICMxMyBMYXVuY2hlZCEKY3B1MTMgQVA6CiAgICAg SUQ6IDB4MTkwMDAwMDAgICBWRVI6IDB4MDAwNTAwMTQgTERSOiAweDAwMDAwMDAwIERGUjogMHhm ZmZmZmZmZgogIGxpbnQwOiAweDAwMDEwNzAwIGxpbnQxOiAweDAwMDAwNDAwIFRQUjogMHgwMDAw MDAwMCBTVlI6IDB4MDAwMDAxZmYKICB0aW1lcjogMHgwMDAxMDBlZiB0aGVybTogMHgwMDAxMDAw MCBlcnI6IDB4MDAwMDAwZjAgcG1jOiAweDAwMDEwNDAwClNNUDogQVAgQ1BVICMxIExhdW5jaGVk IQpjcHUxIEFQOgogICAgIElEOiAweDAxMDAwMDAwICAgVkVSOiAweDAwMDUwMDE0IExEUjogMHgw MDAwMDAwMCBERlI6IDB4ZmZmZmZmZmYKICBsaW50MDogMHgwMDAxMDcwMCBsaW50MTogMHgwMDAw MDQwMCBUUFI6IDB4MDAwMDAwMDAgU1ZSOiAweDAwMDAwMWZmCiAgdGltZXI6IDB4MDAwMTAwZWYg dGhlcm06IDB4MDAwMTAwMDAgZXJyOiAweDAwMDAwMGYwIHBtYzogMHgwMDAxMDQwMApTTVA6IEFQ IENQVSAjMyBMYXVuY2hlZCEKY3B1MyBBUDoKICAgICBJRDogMHgwMzAwMDAwMCAgIFZFUjogMHgw MDA1MDAxNCBMRFI6IDB4MDAwMDAwMDAgREZSOiAweGZmZmZmZmZmCiAgbGludDA6IDB4MDAwMTA3 MDAgbGludDE6IDB4MDAwMDA0MDAgVFBSOiAweDAwMDAwMDAwIFNWUjogMHgwMDAwMDFmZgogIHRp bWVyOiAweDAwMDEwMGVmIHRoZXJtOiAweDAwMDEwMDAwIGVycjogMHgwMDAwMDBmMCBwbWM6IDB4 MDAwMTA0MDAKU01QOiBBUCBDUFUgIzIgTGF1bmNoZWQhCmNwdTIgQVA6CiAgICAgSUQ6IDB4MDIw MDAwMDAgICBWRVI6IDB4MDAwNTAwMTQgTERSOiAweDAwMDAwMDAwIERGUjogMHhmZmZmZmZmZgog IGxpbnQwOiAweDAwMDEwNzAwIGxpbnQxOiAweDAwMDAwNDAwIFRQUjogMHgwMDAwMDAwMCBTVlI6 IDB4MDAwMDAxZmYKICB0aW1lcjogMHgwMDAxMDBlZiB0aGVybTogMHgwMDAxMDAwMCBlcnI6IDB4 MDAwMDAwZjAgcG1jOiAweDAwMDEwNDAwClNNUDogQVAgQ1BVICM1IExhdW5jaGVkIQpjcHU1IEFQ OgogICAgIElEOiAweDA5MDAwMDAwICAgVkVSOiAweDAwMDUwMDE0IExEUjogMHgwMDAwMDAwMCBE RlI6IDB4ZmZmZmZmZmYKICBsaW50MDogMHgwMDAxMDcwMCBsaW50MTogMHgwMDAwMDQwMCBUUFI6 IDB4MDAwMDAwMDAgU1ZSOiAweDAwMDAwMWZmCiAgdGltZXI6IDB4MDAwMTAwZWYgdGhlcm06IDB4 MDAwMTAwMDAgZXJyOiAweDAwMDAwMGYwIHBtYzogMHgwMDAxMDQwMApTTVA6IEFQIENQVSAjNyBM YXVuY2hlZCEKY3B1NyBBUDoKICAgICBJRDogMHgwYjAwMDAwMCAgIFZFUjogMHgwMDA1MDAxNCBM RFI6IDB4MDAwMDAwMDAgREZSOiAweGZmZmZmZmZmCiAgbGludDA6IDB4MDAwMTA3MDAgbGludDE6 IDB4MDAwMDA0MDAgVFBSOiAweDAwMDAwMDAwIFNWUjogMHgwMDAwMDFmZgogIHRpbWVyOiAweDAw MDEwMGVmIHRoZXJtOiAweDAwMDEwMDAwIGVycjogMHgwMDAwMDBmMCBwbWM6IDB4MDAwMTA0MDAK U01QOiBBUCBDUFUgIzQgTGF1bmNoZWQhCmNwdTQgQVA6CiAgICAgSUQ6IDB4MDgwMDAwMDAgICBW RVI6IDB4MDAwNTAwMTQgTERSOiAweDAwMDAwMDAwIERGUjogMHhmZmZmZmZmZgogIGxpbnQwOiAw eDAwMDEwNzAwIGxpbnQxOiAweDAwMDAwNDAwIFRQUjogMHgwMDAwMDAwMCBTVlI6IDB4MDAwMDAx ZmYKICB0aW1lcjogMHgwMDAxMDBlZiB0aGVybTogMHgwMDAxMDAwMCBlcnI6IDB4MDAwMDAwZjAg cG1jOiAweDAwMDEwNDAwClNNUDogQVAgQ1BVICM2IExhdW5jaGVkIQpjcHU2IEFQOgogICAgIElE OiAweDBhMDAwMDAwICAgVkVSOiAweDAwMDUwMDE0IExEUjogMHgwMDAwMDAwMCBERlI6IDB4ZmZm ZmZmZmYKICBsaW50MDogMHgwMDAxMDcwMCBsaW50MTogMHgwMDAwMDQwMCBUUFI6IDB4MDAwMDAw MDAgU1ZSOiAweDAwMDAwMWZmCiAgdGltZXI6IDB4MDAwMTAwZWYgdGhlcm06IDB4MDAwMTAwMDAg ZXJyOiAweDAwMDAwMGYwIHBtYzogMHgwMDAxMDQwMAppb2FwaWMwOiByb3V0aW5nIGludHBpbiAx IChJU0EgSVJRIDEpIHRvIGxhcGljIDEgdmVjdG9yIDQ4CmlvYXBpYzA6IHJvdXRpbmcgaW50cGlu 
IDMgKElTQSBJUlEgMykgdG8gbGFwaWMgMiB2ZWN0b3IgNDgKaW9hcGljMDogcm91dGluZyBpbnRw aW4gNCAoSVNBIElSUSA0KSB0byBsYXBpYyAzIHZlY3RvciA0OAppb2FwaWMwOiByb3V0aW5nIGlu dHBpbiA5IChJU0EgSVJRIDkpIHRvIGxhcGljIDggdmVjdG9yIDQ4CmlvYXBpYzA6IHJvdXRpbmcg aW50cGluIDE0IChJU0EgSVJRIDE0KSB0byBsYXBpYyA5IHZlY3RvciA0OAppb2FwaWMwOiByb3V0 aW5nIGludHBpbiAxNSAoSVNBIElSUSAxNSkgdG8gbGFwaWMgMTAgdmVjdG9yIDQ4CmlvYXBpYzA6 IHJvdXRpbmcgaW50cGluIDIxIChQQ0kgSVJRIDIxKSB0byBsYXBpYyAxMSB2ZWN0b3IgNDgKbXNp OiBBc3NpZ25pbmcgTVNJIElSUSAyNTYgdG8gbG9jYWwgQVBJQyAxNiB2ZWN0b3IgNDgKbXNpOiBB c3NpZ25pbmcgTVNJIElSUSAyNTcgdG8gbG9jYWwgQVBJQyAxNyB2ZWN0b3IgNDgKbXNpOiBBc3Np Z25pbmcgTVNJIElSUSAyNTggdG8gbG9jYWwgQVBJQyAxOCB2ZWN0b3IgNDgKbXNpOiBBc3NpZ25p bmcgTVNJIElSUSAyNTkgdG8gbG9jYWwgQVBJQyAxOSB2ZWN0b3IgNDgKbXNpOiBBc3NpZ25pbmcg TVNJIElSUSAyNjAgdG8gbG9jYWwgQVBJQyAyNCB2ZWN0b3IgNDgKU01QOiBwYXNzZWQgVFNDIHN5 bmNocm9uaXphdGlvbiB0ZXN0ClRTQyB0aW1lY291bnRlciBkaXNjYXJkcyBsb3dlciAxIGJpdChz KQpUaW1lY291bnRlciAiVFNDLWxvdyIgZnJlcXVlbmN5IDE0NjI5NjMxMDYgSHogcXVhbGl0eSAx MDAwCldBUk5JTkc6IFdJVE5FU1Mgb3B0aW9uIGVuYWJsZWQsIGV4cGVjdCByZWR1Y2VkIHBlcmZv cm1hbmNlLgpSb290IG1vdW50IHdhaXRpbmcgZm9yOiB1c2J1czQKUm9vdCBtb3VudCB3YWl0aW5n IGZvcjogdXNidXM0CnVodWIyOiA4IHBvcnRzIHdpdGggOCByZW1vdmFibGUsIHNlbGYgcG93ZXJl ZApSb290IG1vdW50IHdhaXRpbmcgZm9yOiB1c2J1czQKdWdlbjQuMjogPHZlbmRvciAweDQxM2M+ IGF0IHVzYnVzNAp1aHViNTogPHZlbmRvciAweDQxM2MgcHJvZHVjdCAweGEwMDEsIGNsYXNzIDkv MCwgcmV2IDIuMDAvMC4wMCwgYWRkciAyPiBvbiB1c2J1czQKdWh1YjU6IE1UVCBlbmFibGVkCnVo dWI1OiAyIHBvcnRzIHdpdGggMiByZW1vdmFibGUsIHNlbGYgcG93ZXJlZApSb290IG1vdW50IHdh aXRpbmcgZm9yOiB1c2J1czQKdWdlbjQuMzogPERlbGw+IGF0IHVzYnVzNAp1a2JkMDogPEtleWJv YXJkPiBvbiB1c2J1czQKa2JkMiBhdCB1a2JkMAprYmQyOiB1a2JkMCwgZ2VuZXJpYyAoMCksIGNv bmZpZzoweDAsIGZsYWdzOjB4M2QwMDAwClJvb3QgbW91bnQgd2FpdGluZyBmb3I6IHVzYnVzNAp1 Z2VuNC40OiA8REVMTCAgSU5DLj4gYXQgdXNidXM0CnVtYXNzMDogPFZJUlRVQUwgIENEUk9NICA+ IG9uIHVzYnVzNAp1bWFzczA6ICBTQ1NJIG92ZXIgQnVsay1Pbmx5OyBxdWlya3MgPSAweDAxMDAK dW1hc3MwOjI6MDogQXR0YWNoZWQgdG8gc2NidXMyCnVtYXNzMTogPFZJUlRVQUwgIEZMT1BQWSA+ IG9uIHVzYnVzNAp1bWFzczE6ICBTQ1NJIG92ZXIgQnVsay1Pbmx5OyBxdWlya3MgPSAweDAxMDAK dW1hc3MxOjM6MTogQXR0YWNoZWQgdG8gc2NidXMzCihwcm9iZTA6dW1hc3Mtc2ltMDowOjA6MCk6 IERvd24gcmV2aW5nIFByb3RvY29sIFZlcnNpb24gZnJvbSAyIHRvIDA/Cihwcm9iZTE6dW1hc3Mt c2ltMToxOjA6MCk6IERvd24gcmV2aW5nIFByb3RvY29sIFZlcnNpb24gZnJvbSAyIHRvIDA/CkdF T006IG5ldyBkaXNrIGNkMQpwYXNzMSBhdCB1bWFzcy1zaW0wIGJ1cyAwIHNjYnVzMiB0YXJnZXQg MCBsdW4gMApwYXNzMTogPERlbGwgVmlydHVhbCAgQ0RST00gMTIzPiBSZW1vdmFibGUgQ0QtUk9N IFNDU0ktMCBkZXZpY2UgCnBhc3MxOiA0MC4wMDBNQi9zIHRyYW5zZmVycwpwYXNzMiBhdCB1bWFz cy1zaW0xIGJ1cyAxIHNjYnVzMyB0YXJnZXQgMCBsdW4gMApwYXNzMjogPERlbGwgVmlydHVhbCAg RmxvcHB5IDEyMz4gUmVtb3ZhYmxlIERpcmVjdCBBY2Nlc3MgU0NTSS0wIGRldmljZSAKcGFzczI6 IDQwLjAwME1CL3MgdHJhbnNmZXJzCmNkMSBhdCB1bWFzcy1zaW0wIGJ1cyAwIHNjYnVzMiB0YXJn ZXQgMCBsdW4gMApjZDE6IDxEZWxsIFZpcnR1YWwgIENEUk9NIDEyMz4gUmVtb3ZhYmxlIENELVJP TSBTQ1NJLTAgZGV2aWNlIApjZDE6IDQwLjAwME1CL3MgdHJhbnNmZXJzCmNkMTogQXR0ZW1wdCB0 byBxdWVyeSBkZXZpY2Ugc2l6ZSBmYWlsZWQ6IE5PVCBSRUFEWSwgTWVkaXVtIG5vdCBwcmVzZW50 CmRhMCBhdCB1bWFzcy1zaW0xIGJ1cyAxIHNjYnVzMyB0YXJnZXQgMCBsdW4gMApjZDE6IHF1aXJr cz0weDEwPDEwX0JZVEVfT05MWT4KZGEwOiA8RGVsbCBWaXJ0dWFsICBGbG9wcHkgMTIzPiBSZW1v dmFibGUgRGlyZWN0IEFjY2VzcyBTQ1NJLTAgZGV2aWNlIApkYTA6IDQwLjAwME1CL3MgdHJhbnNm ZXJzCmRhMDogQXR0ZW1wdCB0byBxdWVyeSBkZXZpY2Ugc2l6ZSBmYWlsZWQ6IE5PVCBSRUFEWSwg TWVkaXVtIG5vdCBwcmVzZW50CmRhMDogcXVpcmtzPTB4MjxOT182X0JZVEU+CmRhMDogRGVsZXRl IG1ldGhvZHM6IDxOT05FKCopPgpHRU9NOiBuZXcgZGlzayBkYTAKUm9vdCBtb3VudCB3YWl0aW5n IGZvcjogdXNidXM0CnVnZW40LjU6IDx2ZW5kb3IgMHgwNGI0PiBhdCB1c2J1czQKdWh1YjY6IDx2 
--=_9ece7ca3fa5ad763b67bc18fdc2bdee7--

From owner-freebsd-fs@FreeBSD.ORG Sun Feb 23 21:33:51 2014
Date: Sun, 23 Feb 2014 22:33:31 +0100
From: Dimitry Andric
To: Rick Macklem
Cc: freebsd-fs@freebsd.org, Roman Divacky
Subject: Re: BUG: possible NULL pointer dereference in nfs server
Message-Id: <3B667D71-2ED6-4EBF-A586-297BEDC9499A@FreeBSD.org>
In-Reply-To: <346483760.1090608.1391212324309.JavaMail.root@uoguelph.ca>
References: <346483760.1090608.1391212324309.JavaMail.root@uoguelph.ca>
[transport and list headers elided]

On 01 Feb 2014, at 00:52, Rick Macklem wrote:
> John Baldwin wrote:
...
>> Why not make a simple abort() that calls panic()?  It seems clumsy to
>> have to add hacks in the source code.

Maybe, but it is better to just avoid undefined behavior.  Remember the
compiler can do anything it likes here, and it is just being convenient by
inserting an "impossible" instruction.  However, it might as well insert
some of those well-known nasal demons... :)
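[Illustration: the undefined behavior under discussion reduces to a
pattern like the sketch below.  This is a hypothetical reduction, not the
actual sys/fs/nfsserver code.  The double pointer is dereferenced only on
one flag-guarded path, and every caller "knows" it is non-NULL there, but
the compiler cannot see that invariant, so it is free to treat the NULL
case however it likes, including lowering it to a trap instruction.]

#include <stdio.h>

struct lockfile {
	int	lf_fh;
};

/*
 * Dereferencing *lfpp is undefined behavior whenever lfpp == NULL,
 * even if no caller ever passes NULL together with the flag.
 */
int
getfh(unsigned flags, struct lockfile **lfpp)
{
	struct lockfile *lfp;

	if (flags & 0x1) {
		lfp = *lfpp;			/* UB if lfpp == NULL */
		return (lfp->lf_fh);
	}
	return (-1);
}

int
main(void)
{
	struct lockfile lf = { .lf_fh = 42 }, *lfp = &lf;

	printf("%d\n", getfh(0x1, &lfp));	/* prints 42 */
	printf("%d\n", getfh(0x0, NULL));	/* prints -1; lfpp unused */
	return (0);
}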
>> OTOH, the new_lfpp thing just seems to be obfuscation.  Seems you can
>> remove one layer of pointer there.  It doesn't help you with the
>> compiler not being able to see the invariant that prevents the problem
>> though.
>>
>> Index: nfs_nfsdstate.c
>> ===================================================================
>> --- nfs_nfsdstate.c	(revision 261291)
>> +++ nfs_nfsdstate.c	(working copy)
>> @@ -79,7 +79,7 @@ static int nfsrv_getstate(struct nfsclient *clp, n
>>  static void nfsrv_getowner(struct nfsstatehead *hp, struct nfsstate *new_stp,
>>      struct nfsstate **stpp);
>>  static int nfsrv_getlockfh(vnode_t vp, u_short flags,
>> -    struct nfslockfile **new_lfpp, fhandle_t *nfhp, NFSPROC_T *p);
>> +    struct nfslockfile *new_lfp, fhandle_t *nfhp, NFSPROC_T *p);
>>  static int nfsrv_getlockfile(u_short flags, struct nfslockfile **new_lfpp,
>>      struct nfslockfile **lfpp, fhandle_t *nfhp, int lockit);
>>  static void nfsrv_insertlock(struct nfslock *new_lop,
>> @@ -1985,7 +1985,7 @@ tryagain:
>>  	MALLOC(new_lfp, struct nfslockfile *, sizeof (struct nfslockfile),
>>  	    M_NFSDLOCKFILE, M_WAITOK);
>>  	if (vp)
>> -		getfhret = nfsrv_getlockfh(vp, new_stp->ls_flags, &new_lfp,
>> +		getfhret = nfsrv_getlockfh(vp, new_stp->ls_flags, new_lfp,
>>  		    NULL, p);
>>  	NFSLOCKSTATE();
>>  	/*
>> @@ -2235,7 +2235,7 @@ tryagain:
>>  	    M_NFSDSTATE, M_WAITOK);
>>  	MALLOC(new_deleg, struct nfsstate *, sizeof (struct nfsstate),
>>  	    M_NFSDSTATE, M_WAITOK);
>> -	getfhret = nfsrv_getlockfh(vp, new_stp->ls_flags, &new_lfp,
>> +	getfhret = nfsrv_getlockfh(vp, new_stp->ls_flags, new_lfp,
>>  	    NULL, p);
>>  	NFSLOCKSTATE();
>>  	/*
>> @@ -3143,10 +3143,9 @@ out:
>>   */
>>  static int
>>  nfsrv_getlockfh(vnode_t vp, u_short flags,
>> -    struct nfslockfile **new_lfpp, fhandle_t *nfhp, NFSPROC_T *p)
>> +    struct nfslockfile *new_lfp, fhandle_t *nfhp, NFSPROC_T *p)
>>  {
>>  	fhandle_t *fhp = NULL;
>> -	struct nfslockfile *new_lfp;
>>  	int error;
>>
>>  	/*
>> @@ -3154,7 +3153,6 @@ nfsrv_getlockfh(vnode_t vp, u_short flags,
>>  	 * a fhandle_t on the stack.
>>  	 */
>>  	if (flags & NFSLCK_OPEN) {
>> -		new_lfp = *new_lfpp;
>>  		fhp = &new_lfp->lf_fh;
>>  	} else if (nfhp) {
>>  		fhp = nfhp;
>>

> Yep, this looks good to me, although I have no idea if it makes the
> compiler happy?

It seems to, though I think it could still crash if it ever got its flags
in an unexpected state, while new_lfp is NULL.  Let's just hope that never
happens.
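[Illustration: the same hypothetical reduction as above, with jhb's
transformation applied.  The callee now takes the pointer by value, so the
*new_lfpp style dereference disappears; but, as the paragraph above notes,
a NULL pointer with the flag set would still be a bug, since lfp->lf_fh on
a NULL lfp remains undefined behavior.]

#include <stdio.h>

struct lockfile {
	int	lf_fh;
};

/* One level of indirection removed; the caller passes the pointer. */
int
getfh(unsigned flags, struct lockfile *lfp)
{

	if (flags & 0x1)
		return (lfp->lf_fh);	/* caller must guarantee lfp != NULL */
	return (-1);
}

int
main(void)
{
	struct lockfile lf = { .lf_fh = 42 };

	printf("%d\n", getfh(0x1, &lf));	/* prints 42 */
	return (0);
}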
So can we commit jhb's patch now?  That would be very handy for my
clang-sparc64 project branch. :)

-Dimitry

[PGP signature attachment elided]

From owner-freebsd-fs@FreeBSD.ORG Sun Feb 23 21:49:04 2014
Date: Sun, 23 Feb 2014 22:48:45 +0100
From: Dimitry Andric
To: Rick Macklem
Cc: freebsd-fs@freebsd.org, Roman Divacky
Subject: Re: BUG: possible NULL pointer dereference in nfs server
In-Reply-To: <3B667D71-2ED6-4EBF-A586-297BEDC9499A@FreeBSD.org>
References: <346483760.1090608.1391212324309.JavaMail.root@uoguelph.ca>
 <3B667D71-2ED6-4EBF-A586-297BEDC9499A@FreeBSD.org>
[transport and list headers elided]

On 23 Feb 2014, at 22:33, Dimitry Andric wrote:
> On 01 Feb 2014, at 00:52, Rick Macklem wrote:
>> John Baldwin wrote:
> ...
>>> OTOH, the new_lfpp thing just seems to be obfuscation.  Seems you can
>>> remove one layer of pointer there.  It doesn't help you with the
>>> compiler not being able to see the invariant that prevents the problem
>>> though.
>>> [jhb's patch quoted in full; trimmed, see the previous message]
>>>
>> Yep, this looks good to me, although I have no idea if it makes the
>> compiler happy?
>
> It seems to, though I think it could still crash if it ever got its
> flags in an unexpected state while new_lfp is NULL. Let's just hope
> that never happens.
>
> So can we commit jhb's patch now? That would be very handy for my
> clang-sparc64 project branch. :)

Alternatively, just this simple fix:

Index: sys/fs/nfsserver/nfs_nfsdstate.c
===================================================================
--- sys/fs/nfsserver/nfs_nfsdstate.c	(revision 262397)
+++ sys/fs/nfsserver/nfs_nfsdstate.c	(working copy)
@@ -3154,6 +3154,9 @@ nfsrv_getlockfh(vnode_t vp, u_short flags,
 	 * a fhandle_t on the stack.
 	 */
 	if (flags & NFSLCK_OPEN) {
+		if (new_lfpp == NULL) {
+			panic("nfsrv_getlockfh");
+		}
 		new_lfp = *new_lfpp;
 		fhp = &new_lfp->lf_fh;
 	} else if (nfhp) {

If new_lfpp is really never going to be NULL, the panic will never be
hit. If the "impossible" happens anyway, you will have a nice panic.

No undefined behavior anywhere anymore, and no need for abort()
implementations. :-)
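For reference, the idiomatic kernel spelling of a "this can never be
NULL" invariant is KASSERT(9); a minimal sketch of how it could read at
the same spot (placement hypothetical, not a tested patch):

	if (flags & NFSLCK_OPEN) {
		KASSERT(new_lfpp != NULL,
		    ("nfsrv_getlockfh: new_lfpp NULL with NFSLCK_OPEN"));
		new_lfp = *new_lfpp;
		fhp = &new_lfp->lf_fh;
	}

The trade-off is that KASSERT compiles away in kernels built without
INVARIANTS, so unlike the unconditional panic above it would not
express the invariant in every build configuration.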
-Dimitry

From owner-freebsd-fs@FreeBSD.ORG Mon Feb 24 03:13:05 2014
Date: Sun, 23 Feb 2014 22:11:55 -0500 (EST)
From: Rick Macklem
To: Dimitry Andric
Cc: freebsd-fs@freebsd.org, Roman Divacky
Subject: Re: BUG: possible NULL pointer dereference in nfs server
Message-ID: <116945763.10073177.1393211515850.JavaMail.root@uoguelph.ca>

Dimitry Andric wrote:
> On 23 Feb 2014, at 22:33, Dimitry Andric wrote:
> > On 01 Feb 2014, at 00:52, Rick Macklem wrote:
> >> John Baldwin wrote:
> > ...
> >>> OTOH, the new_lfpp thing just seems to be obfuscation. Seems you
> >>> can remove one layer of pointer there. It doesn't help you with
> >>> the compiler not being able to see the invariant that prevents
> >>> the problem though.
> >>> [jhb's patch quoted in full again; trimmed]
> >
> > It seems to, though I think it could still crash if it ever got its
> > flags in an unexpected state while new_lfp is NULL. Let's just hope
> > that never happens.
> >
> > So can we commit jhb's patch now? That would be very handy for my
> > clang-sparc64 project branch. :)
>
> Alternatively, just this simple fix:
>
> [Dimitry's panic patch quoted in full; trimmed]
>
> If new_lfpp is really never going to be NULL, the panic will never be
> hit. If the "impossible" happens anyway, you will have a nice panic.
>
> No undefined behavior anywhere anymore, and no need for abort()
> implementations. :-)
>
> -Dimitry

I have no strong opinion on this, but I do think jhb@'s patch cleans up
the code (and I think I mentioned before that I didn't know if it made
the compiler happy). Personally, I'd suggest jhb@'s patch plus whatever
it takes to make clang happy.
I can't do commits until mid-April, so I'd suggest you guys commit
whatever you think is appropriate.

rick

From owner-freebsd-fs@FreeBSD.ORG Mon Feb 24 11:06:47 2014
Date: Mon, 24 Feb 2014 11:06:47 GMT
Message-Id: <201402241106.s1OB6lY0027496@freefall.freebsd.org>
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD
users. These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker Resp. Description
--------------------------------------------------------------------------------
o kern/186645 fs [fusefs] Crash after unmounting wdfs
o kern/186574 fs [zfs] zpool history hangs (infinite loop)
o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over
o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185858 fs [zfs] zvol clone can't see new device
o kern/184478 fs [smbfs] mount_smbfs cannot read/write files
o kern/182536 fs [zfs] zfs deadlock
o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA
o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181565 fs [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool
o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979 fs [netsmb][patch]: Fix large files handling
o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678 fs [NFS] succesfully exported filesystems being reported
o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854 fs [ufs] FreeBSD kernel crash in UFS
s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS
o kern/178412 fs [smbfs] Coredump when smbfs mounted
o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize
o kern/178387 fs [zfs] [patch] sparse files performance improvements
o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see
o kern/178329 fs [zfs] extended attributes leak
o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat
o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966 fs [zfs] resilver completes but subsequent scrub reports
o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445 fs [hast] HAST panic
o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong
o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime
o kern/175897 fs [zfs] operations on readonly zpool hang
o kern/175449 fs [unionfs] unionfs and devfs misbehaviour
o kern/175179 fs [zfs] ZFS may attach wrong device on move
o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov
o kern/174372 fs [zfs] Pagefault appears to be related to ZFS
o kern/174315 fs [zfs] chflags uchg not supported
o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi
o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718 fs [zfs] phantom directory in zraid2 pool
f kern/173657 fs [nfs] strange UID map with nfsuserd
o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136 fs [unionfs] mounting above the NFS read-only share panic
o kern/172942 fs [smbfs] Unmounting a smb mount when the server became
o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly
o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus
o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental
o kern/170945 fs [gpt] disk layout not portable between direct connect
o bin/170778 fs [zfs] [panic] FreeBSD panics randomly
o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067 fs [zfs] [panic] ZFS panics the server
o kern/167065 fs [zfs] boot fails when a spare is the boot disk
o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477 fs [nfs] NFS data corruption.
o kern/165950 fs [ffs] SU+J and fsck problem
o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392 fs Multiple mkdir/rmdir fails with errno 31
o kern/165087 fs [unionfs] lock violation in unionfs
o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency
o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256 fs [zfs] device entry for volume is not created after zfs
o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944 fs [coda] Coda file system module looks broken in 9.0
o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751 fs [zfs] [panic] kernel panics during file operations
o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi
o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864 fs [ufs] removing journaling from UFS partition fails on
o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280 fs [zfs] Stack overflow in gptzfsboot
o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930 fs [ufs] [panic] kernel core
o kern/159402 fs [zfs][loader] symlinks cause I/O errors
o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs()
o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077 fs [zfs] Can't cd .. with latest zfs version
o kern/159048 fs [smbfs] smb mount corrupts large files
o kern/159045 fs [zfs] [hang] ZFS scrub freezes system
o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802 fs amd(8) ICMP storm and unkillable process.
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929 fs [nfs] NFS slow read
o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781 fs [zfs] zfs is losing the snapshot directory,
p kern/156545 fs [ufs] mv could break UFS on SMP systems
o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current
o kern/155587 fs [zfs] [panic] kernel panic with zfs
p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104 fs [zfs][patch] use /dev prefix by default when importing
o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828 fs [msdosfs] Unable to create directories on external USB
o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228 fs [md] md getting stuck in wdrain state
o kern/153996 fs [zfs] zfs root mount error while kernel is not located
o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716 fs [zfs] zpool scrub time remaining is incorrect
o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions
o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351 fs [zfs] locking directories/files in ZFS
o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750 fs [unionfs] [hang] unionfs locks the machine
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr
o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o kern/118318 fs [nfs] NFS server hangs under special circumstances
o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem
o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498 fs [request] newfs(8) has no option to clear the first 12
o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849 fs [ufs] rename on UFS filesystem is not atomic
o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733 fs [smbfs] smbfs may cause double unlock
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134 fs [smbfs] [patch] Preserve access and modification time
a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657 fs [smbfs] windows client hang when browsing a samba shar
o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859 fs [smbfs] System reboot while umount smbfs.
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779 fs Background-fsck checks one filesystem twice and omits
o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun
o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/67326 fs [msdosfs] crash after attempt to mount write protected
o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503 fs [smbfs] mount_smbfs does not work as non-root
o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc
o kern/36566 fs [smbfs] System reboot with dead smb mount and umount
o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc
o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t
o kern/9619 fs [nfs] Restarting mountd kills existing mounts

339 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Feb 24 22:01:49 2014
Date: Mon, 24 Feb 2014 16:56:37 -0500 (EST)
From: Benjamin Kaduk
To: mikej
Cc: freebsd-fs@freebsd.org
Subject: Re: ffs_fsync: dirty

On Sun, 23 Feb 2014, mikej wrote:

> FreeBSD custom 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r261885: Fri Feb 14
> 08:51:48 EST 2014 mikej@custom:/usr/obj/usr/src/sys/GENERIC amd64
>
> I get a bunch of these while running poudriere.
>
> ffs_fsync: dirty
> 0xfffff808e200e3b0: tag ufs, type VDIR
>     usecount 1, writecount 0, refcount 8 mountedhere 0
>     flags (VI_ACTIVE)
>     v_object 0xfffff8039e934300 ref 0 pages 38 cleanbuf 1 dirtybuf 4
>     lock type ufs: EXCL by thread 0xfffff8021bf72920 (pid 48820, cpdup,
> tid 100292)
> ino 1527731, on dev mfid0p2

That is a potentially interesting violation caught by INVARIANTS. I
don't think I have time to investigate right now, though.

> I also get these LOR's but it never drops to the debugger.

These are well-known and harmless.

-Ben
From owner-freebsd-fs@FreeBSD.ORG Mon Feb 24 22:25:17 2014
Message-Id: <201402242225.s1OMPBed043219@chez.mckusick.com>
Date: Mon, 24 Feb 2014 14:25:11 -0800
From: Kirk McKusick
To: Benjamin Kaduk
Cc: freebsd-fs@freebsd.org
Subject: Re: ffs_fsync: dirty

> Date: Mon, 24 Feb 2014 16:56:37 -0500 (EST)
> From: Benjamin Kaduk
> To: mikej
> Subject: Re: ffs_fsync: dirty
> Cc: freebsd-fs@freebsd.org
>
> On Sun, 23 Feb 2014, mikej wrote:
>
>> FreeBSD custom 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r261885: Fri Feb 14
>> 08:51:48 EST 2014 mikej@custom:/usr/obj/usr/src/sys/GENERIC amd64
>>
>> I get a bunch of these while running poudriere.
>>
>> ffs_fsync: dirty
>> 0xfffff808e200e3b0: tag ufs, type VDIR
>>     usecount 1, writecount 0, refcount 8 mountedhere 0
>>     flags (VI_ACTIVE)
>>     v_object 0xfffff8039e934300 ref 0 pages 38 cleanbuf 1 dirtybuf 4
>>     lock type ufs: EXCL by thread 0xfffff8021bf72920 (pid 48820, cpdup,
>> tid 100292)
>> ino 1527731, on dev mfid0p2
>
> That is a potentially interesting violation caught by INVARIANTS. I
> don't think I have time to investigate right now, though.
>
> -Ben

The above output is rather vexing, but its only bad effect is to waste
a bit of memory. Several of us (kib, jeff, and myself) have tried to
track it down without success. Hopefully we will get it sorted out
before too much longer. But other more pressing problems keep this one
on the back burner.

	Kirk McKusick

From owner-freebsd-fs@FreeBSD.ORG Tue Feb 25 17:27:59 2014
From: John Baldwin
To: Dimitry Andric
Cc: freebsd-fs@freebsd.org, Roman Divacky
Date: Tue, 25 Feb 2014 12:21:40 -0500
Message-Id: <201402251221.40160.jhb@freebsd.org>
Subject: Re: BUG: possible NULL pointer dereference in nfs server

On Sunday, February 23, 2014 4:33:31 pm Dimitry Andric wrote:
> So can we commit jhb's patch now? That would be very handy for my
> clang-sparc64 project branch. :)

I haven't had a chance to test it, though it does compile. If you think
it looks good I can commit it. I'm then happy with your simple if-panic
patch to appease clang.
-- 
John Baldwin

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 26 15:38:19 2014
From: Jason Breitman
To: fs@freebsd.org
Date: Wed, 26 Feb 2014 10:38:09 -0500
Subject: Identify the ZFS Snapshot Disk Hog

What is my best tool or set of command line scripts to find the
snapshot or snapshots that are the disk hogs?

I am familiar with the scripts
  zfs list -r -o space,refer -t snapshot tank/username
and with the command below to identify the estimated space savings
  zfs destroy -nv tank/username@zfs-auto-snap_monthly-2013-11-01-03h00

When I go through and destroy the snapshots from oldest to the most
recent, I do not seem to reclaim any space and am forced to believe
there must be a better way.

The users in question are developers, so there is churn causing the
snapshots to be larger than those of an average user, which means I
will need to create a process I can use on a regular basis.
I am using refquota for each user.

OS: FreeBSD 9.1

# zpool upgrade -v
This system is currently running ZFS pool version 28.
The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements

For more information on a particular version, including supported
releases, see the ZFS Administration Guide.

# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and filesystem user identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported
releases, see the ZFS Administration Guide.

Jason Breitman
jbreitman@zxcvm.com

From owner-freebsd-fs@FreeBSD.ORG Wed Feb 26 16:36:03 2014
Date: Wed, 26 Feb 2014 08:36:03 -0800
Subject: Re: Identify the ZFS Snapshot Disk Hog
From: Matthew Ahrens
To: Jason Breitman
Cc: fs@freebsd.org

On Wed, Feb 26, 2014 at 7:38 AM, Jason Breitman wrote:

> What is my best tool or set of command line scripts to find the
> snapshot or snapshots that are the disk hogs?

As you probably know, the problem is that the space "used" by a given
snapshot only tells you how much space is unique to that snapshot (i.e.
will be freed up when that snapshot is deleted, and is listed by the
"zfs destroy -nv fs@snap" command). It doesn't tell you anything about
how much space is shared between snapshots.

One way to get a clue about this is the "written" property (e.g. "zfs
list -r -o name,written -t snapshot ..."). This tells you how much data
was added in that snapshot. So deleting that snapshot and some
immediately following might free up some space.

The way to get a truly accurate view of the space shared by multiple
snapshots is with an extension to the "zfs destroy -nv" command. You
can list multiple snapshots, as described in the zfs manpage, below.
This way you can see exactly how much space would be reclaimed, taking
into account space that is shared among these snapshots. E.g. you could
look at snapshots with a large "written" and then do "zfs destroy -nv
fs@%", changing the later snap until you find a range that will free up
enough space.

     zfs destroy [-dnpRrv] filesystem|volume@snap[%snap][,...]

     ...

     An inclusive range of snapshots may be specified by separating the
     first and last snapshots with a percent sign. The first and/or
     last snapshots may be left blank, in which case the filesystem's
     oldest or newest snapshot will be implied. Multiple snapshots (or
     ranges of snapshots) of the same filesystem or volume may be
     specified in a comma-separated list of snapshots. Only the
     snapshot's short name (the part after the @) should be specified
     when using a range or comma-separated list to identify multiple
     snapshots.

> I am familiar with the scripts
> zfs list -r -o space,refer -t snapshot tank/username
>
> and with the command below to identify the estimated space savings
> zfs destroy -nv tank/username@zfs-auto-snap_monthly-2013-11-01-03h00
>
> When I go through and destroy the snapshots from oldest to the most
> recent, I do not seem to reclaim any space and am forced to believe
> there must be a better way.

If you don't reclaim any space even after deleting *all* snapshots of a
given filesystem, then it wasn't the snapshots that were using space.
You can determine this beforehand by looking at the "usedbysnapshots"
property, e.g. in the output of "zfs list -o space".

> The users in question are developers, so there is churn causing the
> snapshots to be larger than those of an average user, which means I
> will need to create a process I can use on a regular basis.
> I am using refquota for each user.
>
> OS: FreeBSD 9.1

I'm not sure if the "written" property and the "zfs destroy -nv" range
syntax are available in 9.1; you may need to upgrade to 9.2 to get them.

--matt
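To make that workflow concrete, a small sketch using only the commands
cited above (the pool/filesystem and snapshot names are illustrative):

  # How much each snapshot wrote, and what snapshots cost in total:
  zfs list -r -o name,used,written -t snapshot tank/username
  zfs list -o space tank/username    # look at the usedbysnapshots column

  # Dry-run destroy of an inclusive %-range of snapshots; -n prints
  # the space that would be reclaimed without deleting anything:
  zfs destroy -nv tank/username@zfs-auto-snap_monthly-2013-11-01-03h00%zfs-auto-snap_monthly-2014-01-01-03h00

  # Widen or narrow the range until the reported reclaim is enough,
  # then re-run without -n to actually destroy those snapshots.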
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 06:29:21 2014
Date: Thu, 27 Feb 2014 07:29:20 +0100
Subject: Re: Recovering deleted file, strange structure
From: Felipe Monteiro de Carvalho
To: freebsd-fs@freebsd.org

Hello,

I found that this is the inside of the file .sujournal. Does anyone
know where I can find where this file is implemented?

I already found in FreeBSD sources the file sys/geom/journal/g_journal.h

But it doesn't match what I see in the file =( According to
g_journal.h there should be magic chars in the journal, for example:

#define GJ_HEADER_MAGIC "GJHDR"

But there is no such thing in my .sujournal file =( The structure
sizes and overall layout also don't match =(

Really, no one has any ideas how to help me? Maybe at least recommend
another mailing list if I am in the wrong one?

thanks,
-- 
Felipe Monteiro de Carvalho

From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 07:20:20 2014
Date: Thu, 27 Feb 2014 08:20:20 +0100
Subject: Re: Recovering deleted file, strange structure
From: Felipe Monteiro de Carvalho
To: freebsd-fs@freebsd.org

Hello,

It seems that I found the structures in the source code for FFS ...
but I had explicitly asked for a UFS2 filesystem ... strange ... FFS
is simply UFS2 in FreeBSD?
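On the .sujournal question above: that file is written by SU+J (soft
updates journaling), not by gjournal; gjournal is a separate GEOM-level
mechanism, which would explain why g_journal.h does not match. The SU+J
record structures (struct jsegrec and the related j*rec records) live
with the FFS headers under sys/ufs/ffs — fs.h in particular — and
fsck_ffs's SU+J pass is the main in-tree reader of them. A tiny dump
utility helps line the on-disk bytes up against those definitions; a
minimal sketch (run it against a copy of the file):

	#include <stdio.h>

	/* Hex-dump the first bytes of a file, 16 per row, so they can
	 * be compared against the struct j*rec layouts in
	 * sys/ufs/ffs/fs.h. */
	int
	main(int argc, char **argv)
	{
		unsigned char buf[512];
		size_t n, i;
		FILE *f;

		if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL) {
			fprintf(stderr, "usage: %s file\n", argv[0]);
			return (1);
		}
		n = fread(buf, 1, sizeof(buf), f);
		for (i = 0; i < n; i++)
			printf("%02x%s", buf[i], (i % 16 == 15) ? "\n" : " ");
		if (n % 16 != 0)
			printf("\n");
		fclose(f);
		return (0);
	}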
-- Felipe Monteiro de Carvalho From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 07:52:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3374D57D for ; Thu, 27 Feb 2014 07:52:30 +0000 (UTC) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.81]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id DC9C7153B for ; Thu, 27 Feb 2014 07:52:29 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1WIvWn-0003j3-S5; Thu, 27 Feb 2014 08:37:10 +0100 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org, "Felipe Monteiro de Carvalho" Subject: Re: Recovering deleted file, strange structure References: Date: Thu, 27 Feb 2014 08:37:08 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.16 (FreeBSD) X-Authenticated-As-Hash: 398f5522cb258ce43cb679602f8cfe8b62a256d1 X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: 0.8 X-Spam-Status: No, score=0.8 required=5.0 tests=BAYES_50 autolearn=disabled version=3.3.1 X-Scan-Signature: 68e84d89c742f8c9e42c8c72d6372b71 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Feb 2014 07:52:30 -0000 On Thu, 27 Feb 2014 08:20:20 +0100, Felipe Monteiro de Carvalho wrote: > Hello, > > It seems that I found the structures in the source code for FFS ... > but I had explicitly asked for a UFS2 filesystem ... strange ... Is FFS > simply UFS2 in FreeBSD? > More or less, it is the same. Ronald.
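To see that "FFS" and "UFS2" really do name the same on-disk format, the superblock can be inspected directly. A minimal sketch, assuming the filesystem lives on a placeholder device /dev/ada0p2 (substitute your own):

  # dumpfs(8) decodes struct fs from sys/ufs/ffs/fs.h; a UFS2 filesystem
  # reports magic 19540119 (FS_UFS2_MAGIC)
  dumpfs /dev/ada0p2 | head -n 3
  # raw view: the UFS2 superblock sits at byte offset 65536 (SBLOCK_UFS2)
  dd if=/dev/ada0p2 bs=64k skip=1 count=1 2>/dev/null | hexdump -C | head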
From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 09:12:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7F7D4314 for ; Thu, 27 Feb 2014 09:12:50 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2748B1E19 for ; Thu, 27 Feb 2014 09:12:49 +0000 (UTC) Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s1R9CgRa066532 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL) for ; Thu, 27 Feb 2014 04:12:43 -0500 (EST) (envelope-from mikej@mikej.com) Received: from firewall.mikej.com (localhost.mikej.com [127.0.0.1]) by firewall.mikej.com (8.14.8/8.14.8) with ESMTP id s1R7h5xZ012485 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO) for ; Thu, 27 Feb 2014 02:43:05 -0500 (EST) (envelope-from mikej@mikej.com) Received: (from www@localhost) by firewall.mikej.com (8.14.8/8.14.8/Submit) id s1R7h5SM012484; Thu, 27 Feb 2014 02:43:05 -0500 (EST) (envelope-from mikej@mikej.com) X-Authentication-Warning: firewall.mikej.com: www set sender to mikej@mikej.com using -f To: Subject: Re: =?UTF-8?Q?ffs=5Ffsync=3A=20dirty?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Thu, 27 Feb 2014 03:42:35 -0400 From: mikej In-Reply-To: <201402242225.s1OMPBed043219@chez.mckusick.com> References: <201402242225.s1OMPBed043219@chez.mckusick.com> Message-ID: X-Sender: mikej@mikej.com User-Agent: Roundcube Webmail/0.6-beta X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Feb 2014 09:12:50 -0000 On 2014-02-24 18:25, Kirk McKusick wrote: >> Date: Mon, 24 Feb 2014 16:56:37 -0500 (EST) >> From: Benjamin Kaduk >> To: mikej >> Subject: Re: ffs_fsync: dirty >> Cc: freebsd-fs@freebsd.org >> >> On Sun, 23 Feb 2014, mikej wrote: >> >>> FreeBSD custom 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r261885: Fri >>> Feb 14 >>> 08:51:48 EST 2014 mikej@custom:/usr/obj/usr/src/sys/GENERIC >>> amd64 >>> >>> I get a bunch of these while running poudriere. >>> >>> ffs_fsync: dirty >>> 0xfffff808e200e3b0: tag ufs, type VDIR >>> usecount 1, writecount 0, refcount 8 mountedhere 0 >>> flags (VI_ACTIVE) >>> v_object 0xfffff8039e934300 ref 0 pages 38 cleanbuf 1 dirtybuf 4 >>> lock type ufs: EXCL by thread 0xfffff8021bf72920 (pid 48820, >>> cpdup, tid >>> 100292) >>> ino 1527731, on dev mfid0p2 >> >> That is a potentially interesting violation caught by INVARIANTS. I >> don't >> think I have time to investigate right now, though. >> >> -Ben > > The above output is rather vexing, but its only bad effect is to > waste > a bit of memory. Several of us (kib, jeff, and myself) have tried to > track it down without success. Hopefully we will get it sorted out > before > too much longer. But other more pressing problems keep this one on > the > back burner. 
> > Kirk McKusick All, In the interest of full disclosure: I poked around and found we had set vfs.hirunningspace to 3MB, with a note recording that the change was made because processes were blocking on WSWBUF. After reverting this back to the default of 16777216, the errors have disappeared; however, processes block again. Note that this sysctl is mentioned in the FreeBSD tuning guide. Hopefully this gives you some additional insight into the issue. Regards, Michael Jung From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 16:14:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DB9291B1 for ; Thu, 27 Feb 2014 16:14:49 +0000 (UTC) Received: from mail-lb0-x22e.google.com (mail-lb0-x22e.google.com [IPv6:2a00:1450:4010:c04::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 63015169E for ; Thu, 27 Feb 2014 16:14:49 +0000 (UTC) Received: by mail-lb0-f174.google.com with SMTP id u14so1561891lbd.5 for ; Thu, 27 Feb 2014 08:14:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=zra/qVto/mCMzLVPyP2yBF15zCSy7fZylPqP+fwL86I=; b=Vf4PYPA1hg1f5VGQ3o5kbQKsQtZ/CqE4HbpCxSn03Zsl7EWcy/M18itkmIyXyxs4rO OR+25zBMOU6YoOBqpsprYPLxRQTszfJtzPnF3wU3zEHZeCEfV8DWQCaMQELlzwArjxLp CnPhH8Jp3SyDsU7oBiSKdXybiTtBVDYHwQyFleZ4KwYwhZJEB9o3E4Yp3sk7K2ws/1qd LtRffGyF1qg2KvntIX3hbyIyFy9b4xYQXw9u7LaErO6cmt3zw66wR/+tGYkHp/rhj2IJ D0apEsdZYwaXelAGBgd8ckjzU3W+o8xeISj430xNWOVIhzCy92jpfRqJi9MMd5iaZapj kW3g== X-Received: by 10.152.26.135 with SMTP id l7mr2708640lag.43.1393517686618; Thu, 27 Feb 2014 08:14:46 -0800 (PST) Received: from [192.168.1.129] (mau.donbass.com. [92.242.127.250]) by mx.google.com with ESMTPSA id e1sm8368374laa.8.2014.02.27.08.14.45 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 27 Feb 2014 08:14:45 -0800 (PST) Message-ID: <530F6475.4090508@gmail.com> Date: Thu, 27 Feb 2014 18:14:45 +0200 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: Anton Sayetsky , freebsd-fs@freebsd.org Subject: Re: ZFS and Wired memory, again References: In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Feb 2014 16:14:49 -0000 22.11.2013 21:53, Anton Sayetsky wrote: > Hello, > > I'm planning to deploy a ~150 TiB ZFS pool and when playing with ZFS > noticed that amount of wired memory is MUCH bigger than ARC size (in > absence of other hungry memory consumers, of course). I'm afraid that > this strange behavior may become even worse on a machine with big pool > and some hundreds gibibytes of RAM. > > So let me explain what happened. 
+1 from me, FreeBSD 10, uma=0 52 processes: 2 running, 49 sleeping, 1 zombie CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M Other Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse Machine is plain dead. Running database or squid or anything causes excessive swapping. This is the state when I disabled all payload, with everything started swap goes to 500M and machine is burning disks. -- Sphinx of black quartz, judge my vow. From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 17:25:15 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 427E0276; Thu, 27 Feb 2014 17:25:15 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id 02A001F8D; Thu, 27 Feb 2014 17:25:14 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id EC0C6909; Thu, 27 Feb 2014 18:23:39 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id EAA6D908; Thu, 27 Feb 2014 18:23:39 +0100 (CET) Date: Thu, 27 Feb 2014 18:23:39 +0100 (CET) From: krichy@tvnetwork.hu To: Julian Elischer Subject: Re: What types of SSDs to use..... In-Reply-To: <530818E5.4090100@freebsd.org> Message-ID: References: <5305F8B0.1060308@digiware.nl> <53060AF4.9070900@tefre.com> <530818E5.4090100@freebsd.org> User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: Erik Stian Tefre , fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Feb 2014 17:25:15 -0000 We use Intel S3700, +1 for that, larger endurance, has power-loss protection, and IOPS is very nice. Kojedzinszky Richard Euronet Magyarorszag Informatika Zrt. On Sat, 22 Feb 2014, Julian Elischer wrote: > Date: Sat, 22 Feb 2014 11:26:29 +0800 > From: Julian Elischer > To: Erik Stian Tefre , Willem Jan Withagen , > fs@freebsd.org > Subject: Re: What types of SSDs to use..... > > On 2/20/14, 10:02 PM, Erik Stian Tefre wrote: >>> Hi, >>> >>> I'm looking for advise and suggestions on what SSDs to use for the >>> ZFS-systems I have and/or am building.... >>> >>> I know the difference between SLC en MLC , but beyond that I have not >>> the best experience with all the older SSD I have lingering about here. >>> >>> Most of them just "disconnect" after a while. >>> It could be because they have exceeded their wear level and just can not >>> write any more. But than that has occurred rather fast. >>> >>> So what types SSD are others using for >>> ZIL >>> cache >>> what is your experience with them >>> >> Intel SSDs are good. I've got some handfuls of x25-e, 320, DC S3500 and DC >> S3700 running in supermicro servers and none of them have ever failed or >> dropped out in any way. The x25-es have been hammered with database loads >> 24/7 since 2009 without any issues. > > the newest intel drives have enough RAM inhtem to actually index the entire > drive in ram, so the performance is a lot better. > Prior to that they had to hold the index on flash, (with a cache). > >> >> I use DC S3500 for cache. 
DC S3700 is probably better for ZIL. (The 3700 >> has a higher write endurance HET-MLC, 10x the endurance of the S3500 I >> think and 4x the write IOPS.) >> >> Here's an interesting SSD stress test report by the way: >> http://www.extremetech.com/computing/173887-ssd-stress-testing-finds-intel-might-be-the-only-reliable-drive-manufacturer > > doesn't count PCI SSDs.. > >> >> -- >> Erik >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Feb 27 17:27:15 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 34FA8371; Thu, 27 Feb 2014 17:27:15 +0000 (UTC) Received: from krichy.tvnetwork.hu (unknown [IPv6:2a01:be00:0:2::10]) by mx1.freebsd.org (Postfix) with ESMTP id B15CF1FAD; Thu, 27 Feb 2014 17:27:14 +0000 (UTC) Received: by krichy.tvnetwork.hu (Postfix, from userid 1000) id CD64C90F; Thu, 27 Feb 2014 18:25:33 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by krichy.tvnetwork.hu (Postfix) with ESMTP id CCC0F90E; Thu, 27 Feb 2014 18:25:33 +0100 (CET) Date: Thu, 27 Feb 2014 18:25:33 +0100 (CET) From: krichy@tvnetwork.hu To: Julian Elischer Subject: Re: What types of SSDs to use..... In-Reply-To: Message-ID: References: <5305F8B0.1060308@digiware.nl> <53060AF4.9070900@tefre.com> <530818E5.4090100@freebsd.org> User-Agent: Alpine 2.10 (DEB 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: Erik Stian Tefre , fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Feb 2014 17:27:15 -0000 And actually taking their lifetime endurance into account, they are not more expensive than other consumer drives. Kojedzinszky Richard Euronet Magyarorszag Informatika Zrt. On Thu, 27 Feb 2014, krichy@tvnetwork.hu wrote: > Date: Thu, 27 Feb 2014 18:23:39 +0100 (CET) > From: krichy@tvnetwork.hu > To: Julian Elischer > Cc: Erik Stian Tefre , fs@freebsd.org > Subject: Re: What types of SSDs to use..... > > > We use Intel S3700, +1 for that, larger endurance, has power-loss protection, > and IOPS is very nice. > > > Kojedzinszky Richard > Euronet Magyarorszag Informatika Zrt. > > On Sat, 22 Feb 2014, Julian Elischer wrote: > >> Date: Sat, 22 Feb 2014 11:26:29 +0800 >> From: Julian Elischer >> To: Erik Stian Tefre , Willem Jan Withagen >> , >> fs@freebsd.org >> Subject: Re: What types of SSDs to use..... >> >> On 2/20/14, 10:02 PM, Erik Stian Tefre wrote: >>>> Hi, >>>> >>>> I'm looking for advise and suggestions on what SSDs to use for the >>>> ZFS-systems I have and/or am building.... >>>> >>>> I know the difference between SLC en MLC , but beyond that I have not >>>> the best experience with all the older SSD I have lingering about here. >>>> >>>> Most of them just "disconnect" after a while. >>>> It could be because they have exceeded their wear level and just can not >>>> write any more. 
But than that has occurred rather fast. >>>> >>>> So what types SSD are others using for >>>> ZIL >>>> cache >>>> what is your experience with them >>>> >>> Intel SSDs are good. I've got some handfuls of x25-e, 320, DC S3500 and DC >>> S3700 running in supermicro servers and none of them have ever failed or >>> dropped out in any way. The x25-es have been hammered with database loads >>> 24/7 since 2009 without any issues. >> >> the newest intel drives have enough RAM inhtem to actually index the entire >> drive in ram, so the performance is a lot better. >> Prior to that they had to hold the index on flash, (with a cache). >> >>> >>> I use DC S3500 for cache. DC S3700 is probably better for ZIL. (The 3700 >>> has a higher write endurance HET-MLC, 10x the endurance of the S3500 I >>> think and 4x the write IOPS.) >>> >>> Here's an interesting SSD stress test report by the way: >>> http://www.extremetech.com/computing/173887-ssd-stress-testing-finds-intel-might-be-the-only-reliable-drive-manufacturer >> >> doesn't count PCI SSDs.. >> >>> >>> -- >>> Erik >>> _______________________________________________ >>> freebsd-fs@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> >> >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 11:47:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 333CA6A for ; Fri, 28 Feb 2014 11:47:49 +0000 (UTC) Received: from mail-vc0-x229.google.com (mail-vc0-x229.google.com [IPv6:2607:f8b0:400c:c03::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E192C126A for ; Fri, 28 Feb 2014 11:47:48 +0000 (UTC) Received: by mail-vc0-f169.google.com with SMTP id hq11so605906vcb.28 for ; Fri, 28 Feb 2014 03:47:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=V/rkAizfZo01cpYQgcCQISHIp3g52ih9uaBA7W2SXKw=; b=GNydLZPlwfoE1VxpO6712Bqih2G9/o2EClcXWjmYFXpvx5OL0FQHvvKsxJX2VEzmFX DxxDVyknHdaSqzOnB2NOhMMfPQcTvHYHFFeU5UK670SrWjOFDep2FAsuE0B2sQExE6pb FRznSB1L6/ueBTN8VXqR+616LVbQrP8DOD+TnvFXouv0Vm0GIoPd+Jxp1Mb3hyeKpXNQ RhvcuiPenF0pbqCvc7mY57gGyLd3hZi1EODp3/y268gND3I+i1zgIY5XeLyzWD/06FVo ekHy+BzF7DEHrXIAomtMovCxCELmmsO4vMSqutrweGKVjRis8JenJ69Di9blVw8yH5w2 MGAQ== X-Received: by 10.58.169.7 with SMTP id aa7mr2193201vec.24.1393588068017; Fri, 28 Feb 2014 03:47:48 -0800 (PST) MIME-Version: 1.0 Received: by 10.59.10.202 with HTTP; Fri, 28 Feb 2014 03:47:17 -0800 (PST) In-Reply-To: <530F6475.4090508@gmail.com> References: <530F6475.4090508@gmail.com> From: Matthias Gamsjager Date: Fri, 28 Feb 2014 12:47:17 +0100 Message-ID: Subject: Re: ZFS and Wired memory, again To: Volodymyr Kostyrko Content-Type: text/plain; 
charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 11:47:49 -0000 > > >> > +1 from me, FreeBSD 10, uma=0 > > 52 processes: 2 running, 49 sleeping, 1 zombie > CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle > Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free > ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M Other > Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse > > Machine is plain dead. Running database or squid or anything causes > excessive swapping. This is the state when I disabled all payload, with > everything started swap goes to 500M and machine is burning disks. > > I wonder do you use any zfs tuning? Like max arc size? Wonder if setting that to a reasonable amount would help. From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 14:11:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CEAEF3E0 for ; Fri, 28 Feb 2014 14:11:11 +0000 (UTC) Received: from mail-la0-x229.google.com (mail-la0-x229.google.com [IPv6:2a00:1450:4010:c03::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 538891047 for ; Fri, 28 Feb 2014 14:11:11 +0000 (UTC) Received: by mail-la0-f41.google.com with SMTP id gl10so2769460lab.0 for ; Fri, 28 Feb 2014 06:11:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=ihhTnZjc205kPUocQyOqZjayYiFk5BprFrWSSU1AzVU=; b=wyfh/brW7EPlXjwTN8D5FC3zesiw5k/k284q3jtlIy2Fu/1HFfEw7Jd2sm5HDUSpWu 8wDk5Xvtvle6dKkb/x4yz0bls4EH00WPG9S2vI/GJc4YZwhzSPaT572pj3mrHBEU6n2W PYMLJ7jLZMqNoH2sfIw7x4CkpbILQMWOslKkGPs5PIfwoPvLAjre6y2NM3K8NNrC8Te2 9DKOejvdxtA4rczd3bjKQ2K+opULj9xfzQ6fd32wEgGBKD960KJHpbRbN5AhKcSFeDwh fdEIxMcL/XkfqI9NTjA08zew8onVCBVpH+4AAE8yw2M1lHkgbsFcUTWkNrg5eEJaCje0 h3xg== X-Received: by 10.153.8.194 with SMTP id dm2mr2006427lad.54.1393596669505; Fri, 28 Feb 2014 06:11:09 -0800 (PST) Received: from [192.168.1.129] (mau.donbass.com. 
[92.242.127.250]) by mx.google.com with ESMTPSA id v5sm13817306laj.0.2014.02.28.06.11.08 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 28 Feb 2014 06:11:09 -0800 (PST) Message-ID: <531098FC.6090806@gmail.com> Date: Fri, 28 Feb 2014 16:11:08 +0200 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: Matthias Gamsjager Subject: Re: ZFS and Wired memory, again References: <530F6475.4090508@gmail.com> In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 14:11:11 -0000 28.02.2014 13:47, Matthias Gamsjager wrote: > > > +1 from me, FreeBSD 10, uma=0 > > 52 processes: 2 running, 49 sleeping, 1 zombie > CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle > Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free > ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M Other > Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse > > Machine is plain dead. Running database or squid or anything causes > excessive swapping. This is the state when I disabled all payload, > with everything started swap goes to 500M and machine is burning disks. > > > I wonder do you use any zfs tuning? Like max arc size? Wonder if setting > that to a reasonable amount would help. The previous sample was taken without any tunables. After that I enabled vfs.zfs.zio.use_uma in /boot/loader.conf, and now everything looks like this: 103 processes: 1 running, 101 sleeping, 1 zombie CPU: 0.0% user, 0.0% nice, 1.2% system, 2.0% interrupt, 96.9% idle Mem: 137M Active, 13M Inact, 3176M Wired, 14M Cache, 111M Free ARC: 2026M Total, 781M MFU, 72M MRU, 414K Anon, 1221M Header, 98M Other Swap: 4096M Total, 263M Used, 3833M Free, 6% Inuse Currently wired memory grows and pushes inactive to swap. Sometimes wired memory retracts, freeing 100MB or so. The machine is currently in a working state. The major changes are: * less wired; * more arc; * a lot of header memory, but less than before; I wonder whether it only grows or also retracts... I'm also thinking about trying a patch to l2arc, as this system uses an external l2 cache. -- Sphinx of black quartz, judge my vow. 
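For reference, the numbers behind top's ARC line are exported under the kstat.zfs.misc.arcstats sysctl tree, so the oddly large Header figure can be watched directly. A sketch, with the sysctl names as I remember them on 9.x/10.x:

  # total ARC and its breakdown; hdr_size is the suspicious one in the sample above
  sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hdr_size \
         kstat.zfs.misc.arcstats.data_size kstat.zfs.misc.arcstats.other_size
  # the UMA switch is a boot-time tunable (set in /boot/loader.conf), readable here:
  sysctl vfs.zfs.zio.use_uma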
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 18:31:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F1DEBEEF for ; Fri, 28 Feb 2014 18:31:51 +0000 (UTC) Received: from mail-ve0-x236.google.com (mail-ve0-x236.google.com [IPv6:2607:f8b0:400c:c01::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B014918C4 for ; Fri, 28 Feb 2014 18:31:51 +0000 (UTC) Received: by mail-ve0-f182.google.com with SMTP id jy13so1165332veb.27 for ; Fri, 28 Feb 2014 10:31:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=KcOetXLlXTrdFaI7JZ3rt66uCiz2m9mhvGkU5jq3zDA=; b=x1PWYry+kiU1Eg8b1XcsKn1HmpDdOtShHEAYfZK+/ZKYoKf9SVOP75zd/t7CVJN3Y1 JBz907G79wvrggpjt7go+gRh5s4+q2UucRe9iTWUTTCwiMJDzNrs+JEJAbWL/FDgQVn3 VgHAfAqY9rFPxvKBiFiGHu84osagMQYh7qJ5xDbA2Ek0lLconrExNuYIBUEeGmNM+EvD CaUnOHOh1l9jZR3EJ/uCevYEOp4VcZmGW4mG5yq2ZhV/EqKELn8rCrb3EIPBXLOjMMdB UoRLYfRrgq6ojcEJ/XoWs4LvNARQ+10tkpk7fgNocICve7o68d3jANc6WYpzZIi+YuBi C/DA== X-Received: by 10.220.103.141 with SMTP id k13mr3644751vco.25.1393612310768; Fri, 28 Feb 2014 10:31:50 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Fri, 28 Feb 2014 10:31:30 -0800 (PST) In-Reply-To: References: <530F6475.4090508@gmail.com> From: Anton Sayetsky Date: Fri, 28 Feb 2014 20:31:30 +0200 Message-ID: Subject: Re: ZFS and Wired memory, again To: Matthias Gamsjager Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, Volodymyr Kostyrko X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 18:31:52 -0000 2014-02-28 13:47 GMT+02:00 Matthias Gamsjager : >>> >> >> +1 from me, FreeBSD 10, uma=0 >> >> 52 processes: 2 running, 49 sleeping, 1 zombie >> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle >> Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free >> ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M Other >> Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse >> >> Machine is plain dead. Running database or squid or anything causes >> excessive swapping. This is the state when I disabled all payload, with >> everything started swap goes to 500M and machine is burning disks. >> > > I wonder do you use any zfs tuning? Like max arc size? Wonder if setting > that to a reasonable amount would help. Please read carefully my first message. No any tuning (configs posted), and problem is not that ZFS uses big amount of memory. I'm experiencing exactly one problem - Wired mem is significantly larger than ARC. E.g. if my ARC size is 2048M, I'm expecting that Wired will not consume more than ARC+~150M. 
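A minimal sketch that puts a number on the wired-versus-ARC gap being described, using only stock sysctls (wired is accounted in pages, the ARC in bytes):

  #!/bin/sh
  # print wired memory, ARC size, and the difference, in MiB
  pg=$(sysctl -n hw.pagesize)
  wired=$(( $(sysctl -n vm.stats.vm.v_wire_count) * pg ))
  arc=$(sysctl -n kstat.zfs.misc.arcstats.size)
  echo "wired: $((wired / 1048576))M arc: $((arc / 1048576))M gap: $(( (wired - arc) / 1048576 ))M"

Run before and after a pool read (such as the tar to /dev/null mentioned in the thread), this reproduces the by-hand arithmetic from the original report.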
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 18:42:42 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2A4084FD for ; Fri, 28 Feb 2014 18:42:42 +0000 (UTC) Received: from thebighonker.lerctr.org (lrosenman-1-pt.tunnel.tserv8.dal1.ipv6.he.net [IPv6:2001:470:1f0e:3ad::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id EE99019C9 for ; Fri, 28 Feb 2014 18:42:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Message-ID:References:In-Reply-To:Subject:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=eRr9xMlBtHVKp254sWoPIP1nkVEAWzJgrEdfMXugPUs=; b=jZHgqVtmpr4FAvXVaUJVo5hSBr5R8ApoJu0Fm0N4CvzrdLCMQZxVBUD9JJD+52yvC9T8oUeMyxpoHRAxk3OShrq1kRF8dsGQto9jhocWz9tBTNPE8+/e0vdoN04x09OtKkLo/ljOyvxZSfwz5+/q0K+p1m6hSfQ4Umj/onfstUg=; Received: from localhost.lerctr.org ([127.0.0.1]:24285 helo=webmail.lerctr.org) by thebighonker.lerctr.org with esmtpa (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WJSOJ-0002nH-Fe for freebsd-fs@freebsd.org; Fri, 28 Feb 2014 12:42:41 -0600 Received: from proxy.lucent.com ([135.245.48.14]) by webmail.lerctr.org with HTTP (HTTP/1.1 POST); Fri, 28 Feb 2014 12:42:35 -0600 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Fri, 28 Feb 2014 12:42:35 -0600 From: Larry Rosenman To: freebsd-fs@freebsd.org Subject: Re: ZFS and Wired memory, again In-Reply-To: References: <530F6475.4090508@gmail.com> Message-ID: X-Sender: ler@lerctr.org User-Agent: Roundcube Webmail/0.9.5 X-Spam-Score: -2.9 (--) X-LERCTR-Spam-Score: -2.9 (--) X-Spam-Report: SpamScore (-2.9/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-0.001 X-LERCTR-Spam-Report: SpamScore (-2.9/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-0.001 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 18:42:42 -0000 On 2014-02-28 12:31, Anton Sayetsky wrote: > 2014-02-28 13:47 GMT+02:00 Matthias Gamsjager : >>>> >>> >>> +1 from me, FreeBSD 10, uma=0 >>> >>> 52 processes: 2 running, 49 sleeping, 1 zombie >>> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% >>> idle >>> Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free >>> ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M >>> Other >>> Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse >>> >>> Machine is plain dead. Running database or squid or anything causes >>> excessive swapping. This is the state when I disabled all payload, >>> with >>> everything started swap goes to 500M and machine is burning disks. >>> >> >> I wonder do you use any zfs tuning? Like max arc size? Wonder if >> setting >> that to a reasonable amount would help. > Please read carefully my first message. No any tuning (configs > posted), and problem is not that ZFS uses big amount of memory. I'm > experiencing exactly one problem - Wired mem is significantly larger > than ARC. > E.g. if my ARC size is 2048M, I'm expecting that Wired will not > consume more than ARC+~150M. 
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Other pieces of the system used wired memory...... Have you investigated that as well? -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: ler@lerctr.org US Mail: 108 Turvey Cove, Hutto, TX 78634-5688 From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 18:53:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7B2FE7D2 for ; Fri, 28 Feb 2014 18:53:57 +0000 (UTC) Received: from mail-ve0-x22a.google.com (mail-ve0-x22a.google.com [IPv6:2607:f8b0:400c:c01::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 353A91AA4 for ; Fri, 28 Feb 2014 18:53:57 +0000 (UTC) Received: by mail-ve0-f170.google.com with SMTP id pa12so1183949veb.29 for ; Fri, 28 Feb 2014 10:53:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=80/LD/omtOqR6eKsefVpwZhdAyd8YePInhgdb9fc0eU=; b=cGUTtKBwdbdY9JgVfUwhZmXJ+okMZD9U6TYWEezsMA3UCjSfIJ34es33qNMSsC1Hf2 gT7wu1RhzxeJHFeEpkzegUoSKEwWJCr1kB0fBG7pztrTzAQtz2LRZF5ut/DbVku7OxXS /brlpaULDcS553cagp9huI1b8YuvLnK5foDhYYCPPEBtsmsSfq6YvwjGM2lX5RlpmVUS vmP0cH/O+9Fc75JU7NhyoXfsVEl8x3H6AN+/0603kextlrzBDjDzsqXc++RyFAxQ7mkn CUDU/NmklpK1pYpZERzgpEYOasifaZkTjdIBPmrTSUdnfZG/hBLevzC+8ivQ6ydEGUg4 nk0Q== X-Received: by 10.52.229.133 with SMTP id sq5mr1886078vdc.45.1393613636268; Fri, 28 Feb 2014 10:53:56 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Fri, 28 Feb 2014 10:53:36 -0800 (PST) In-Reply-To: References: <530F6475.4090508@gmail.com> From: Anton Sayetsky Date: Fri, 28 Feb 2014 20:53:36 +0200 Message-ID: Subject: Re: ZFS and Wired memory, again To: Larry Rosenman Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 18:53:57 -0000 2014-02-28 20:42 GMT+02:00 Larry Rosenman : > On 2014-02-28 12:31, Anton Sayetsky wrote: >> >> 2014-02-28 13:47 GMT+02:00 Matthias Gamsjager : >>>>> >>>>> >>>> >>>> +1 from me, FreeBSD 10, uma=0 >>>> >>>> 52 processes: 2 running, 49 sleeping, 1 zombie >>>> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle >>>> Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free >>>> ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M Other >>>> Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse >>>> >>>> Machine is plain dead. Running database or squid or anything causes >>>> excessive swapping. This is the state when I disabled all payload, with >>>> everything started swap goes to 500M and machine is burning disks. >>>> >>> >>> I wonder do you use any zfs tuning? Like max arc size? Wonder if setting >>> that to a reasonable amount would help. >> >> Please read carefully my first message. No any tuning (configs >> posted), and problem is not that ZFS uses big amount of memory. 
I'm >> experiencing exactly one problem - Wired mem is significantly larger >> than ARC. >> E.g. if my ARC size is 2048M, I'm expecting that Wired will not >> consume more than ARC+~150M. >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > Other pieces of the system used wired memory...... > > Have you investigated that as well? And again - this has detailed explanation in the first letter. In short: 1. I've booted the system without any memory hungry services (only basic like cron, powerd). Wired is 95M, ARC is 25M. 2. Then I started reading ZFS pool (tar cpf /dev/null /pool/mountpoint). ARC - 2048M, Wired - ~2800M. WTF? Who eats more than 700M of kernel memory? Do you really think that powerd or cron can do this? From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 19:11:28 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 54FC1E62 for ; Fri, 28 Feb 2014 19:11:28 +0000 (UTC) Received: from mail-pd0-x22d.google.com (mail-pd0-x22d.google.com [IPv6:2607:f8b0:400e:c02::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2677B1C29 for ; Fri, 28 Feb 2014 19:11:28 +0000 (UTC) Received: by mail-pd0-f173.google.com with SMTP id z10so1102728pdj.18 for ; Fri, 28 Feb 2014 11:11:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=cUdQF9kVRAiMrnb9qzsBjpk1kgCUW5Q5pz5P51Nqvrs=; b=lffZrxPKKqEMLd4kUvMBMmvAPEVBWcbhK0Ur4n8SCpLaeeoQNyxX5WnPOf7ChkCn9p /gndtJgt2gkQWYrMSanzE2a9d9J7w4Y3Tt5y3GXvvaH7BCo50NOr1porEENu/yGUR9RQ klccza7m5axZZ270it7UJVfAAREAXMJULF02YQVNKjAcVPJyad5AbbJYQD3q61Wt0+CK IyXG5qE10m7OOvuAzLg4CzYkzIpw2MRhgaIyIEWToNtbIk/Y5qVT4vu5uETZ6mb7tPDM 2xWA8llVylQ0Su1gXm136q0mVtXMlkErejMfSgnrJgKziofnOFpOm/yQZ638ZJDN2psA w2zQ== MIME-Version: 1.0 X-Received: by 10.66.146.105 with SMTP id tb9mr5227470pab.157.1393614687749; Fri, 28 Feb 2014 11:11:27 -0800 (PST) Received: by 10.70.55.7 with HTTP; Fri, 28 Feb 2014 11:11:27 -0800 (PST) In-Reply-To: References: <530F6475.4090508@gmail.com> Date: Fri, 28 Feb 2014 13:11:27 -0600 Message-ID: Subject: Re: ZFS and Wired memory, again From: Adam Vande More To: Anton Sayetsky Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 19:11:28 -0000 On Fri, Feb 28, 2014 at 12:53 PM, Anton Sayetsky wrote: > 2014-02-28 20:42 GMT+02:00 Larry Rosenman : > > On 2014-02-28 12:31, Anton Sayetsky wrote: > >> > >> 2014-02-28 13:47 GMT+02:00 Matthias Gamsjager : > >>>>> > >>>>> > >>>> > >>>> +1 from me, FreeBSD 10, uma=0 > >>>> > >>>> 52 processes: 2 running, 49 sleeping, 1 zombie > >>>> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle > >>>> Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free > >>>> ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M > Other > >>>> Swap: 4096M Total, 126M 
Used, 3969M Free, 3% Inuse > >>>> > >>>> Machine is plain dead. Running database or squid or anything causes > >>>> excessive swapping. This is the state when I disabled all payload, > with > >>>> everything started swap goes to 500M and machine is burning disks. > >>>> > >>> > >>> I wonder do you use any zfs tuning? Like max arc size? Wonder if > setting > >>> that to a reasonable amount would help. > >> > >> Please read carefully my first message. No any tuning (configs > >> posted), and problem is not that ZFS uses big amount of memory. I'm > >> experiencing exactly one problem - Wired mem is significantly larger > >> than ARC. > >> E.g. if my ARC size is 2048M, I'm expecting that Wired will not > >> consume more than ARC+~150M. > >> _______________________________________________ > >> freebsd-fs@freebsd.org mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > > > Other pieces of the system used wired memory...... > > > > Have you investigated that as well? > And again - this has detailed explanation in the first letter. In short: > 1. I've booted the system without any memory hungry services (only > basic like cron, powerd). Wired is 95M, ARC is 25M. > 2. Then I started reading ZFS pool (tar cpf /dev/null > /pool/mountpoint). ARC - 2048M, Wired - ~2800M. > WTF? Who eats more than 700M of kernel memory? Do you really think > that powerd or cron can do this? Without question, cron could do it. -- Adam From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 19:15:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 856F11C9 for ; Fri, 28 Feb 2014 19:15:29 +0000 (UTC) Received: from mail-ve0-x230.google.com (mail-ve0-x230.google.com [IPv6:2607:f8b0:400c:c01::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 406FA1C59 for ; Fri, 28 Feb 2014 19:15:29 +0000 (UTC) Received: by mail-ve0-f176.google.com with SMTP id cz12so1187460veb.35 for ; Fri, 28 Feb 2014 11:15:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=2ceCV8XlLO2SCNkpE5pgRTrFeCrU9mzohrnbtasj1co=; b=g+K5C8SfONNAnTCrmhVKGTpCwSGvuv03LTnfZ6iPpuSNSxowbYpa73YBcFV4T+Bfzo xR8zUmXDs1OvXE5Vo25XcTI9tuMPKcb7mwoWYEVva36yZSsNaPTpQt7FwJkj+62w3obJ Ddv7I9ziXb8wV9rLx/noznwed6s0YEgFaGeE3I/yncdk52IqKn4fiUgZtG+ftJOyWLRQ /7+fEyRygnCIWlX5Gi5hCXF68xv34E/+8A1hJoexHk7SEowCQEbuy7hfAn9MnuCJc2DC S51GCAu1tVOOWpqe5eN9zToN2JoIKEUdPOQZrCKZ8mqQFpDw/cBGayegSuA/GSwGl2Sc LBUg== X-Received: by 10.58.146.5 with SMTP id sy5mr1633315veb.43.1393614928482; Fri, 28 Feb 2014 11:15:28 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Fri, 28 Feb 2014 11:15:08 -0800 (PST) In-Reply-To: References: <530F6475.4090508@gmail.com> From: Anton Sayetsky Date: Fri, 28 Feb 2014 21:15:08 +0200 Message-ID: Subject: Re: ZFS and Wired memory, again To: Adam Vande More Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 19:15:29 -0000 2014-02-28 21:11 
GMT+02:00 Adam Vande More : > On Fri, Feb 28, 2014 at 12:53 PM, Anton Sayetsky wrote: >> >> 2014-02-28 20:42 GMT+02:00 Larry Rosenman : >> > On 2014-02-28 12:31, Anton Sayetsky wrote: >> >> >> >> 2014-02-28 13:47 GMT+02:00 Matthias Gamsjager : >> >>>>> >> >>>>> >> >>>> >> >>>> +1 from me, FreeBSD 10, uma=0 >> >>>> >> >>>> 52 processes: 2 running, 49 sleeping, 1 zombie >> >>>> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% >> >>>> idle >> >>>> Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free >> >>>> ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M >> >>>> Other >> >>>> Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse >> >>>> >> >>>> Machine is plain dead. Running database or squid or anything causes >> >>>> excessive swapping. This is the state when I disabled all payload, >> >>>> with >> >>>> everything started swap goes to 500M and machine is burning disks. >> >>>> >> >>> >> >>> I wonder do you use any zfs tuning? Like max arc size? Wonder if >> >>> setting >> >>> that to a reasonable amount would help. >> >> >> >> Please read carefully my first message. No any tuning (configs >> >> posted), and problem is not that ZFS uses big amount of memory. I'm >> >> experiencing exactly one problem - Wired mem is significantly larger >> >> than ARC. >> >> E.g. if my ARC size is 2048M, I'm expecting that Wired will not >> >> consume more than ARC+~150M. >> >> _______________________________________________ >> >> freebsd-fs@freebsd.org mailing list >> >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > >> > Other pieces of the system used wired memory...... >> > >> > Have you investigated that as well? >> And again - this has detailed explanation in the first letter. In short: >> 1. I've booted the system without any memory hungry services (only >> basic like cron, powerd). Wired is 95M, ARC is 25M. >> 2. Then I started reading ZFS pool (tar cpf /dev/null >> /pool/mountpoint). ARC - 2048M, Wired - ~2800M. >> WTF? Who eats more than 700M of kernel memory? Do you really think >> that powerd or cron can do this? > > > Without question, cron could do it. > > -- > Adam But never does on the same system with ZFS disabled. 
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 19:35:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BB336D59 for ; Fri, 28 Feb 2014 19:35:33 +0000 (UTC) Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com [66.111.4.27]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7732F1E4B for ; Fri, 28 Feb 2014 19:35:33 +0000 (UTC) Received: from compute2.internal (compute2.nyi.mail.srv.osa [10.202.2.42]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 6B45B211FD for ; Fri, 28 Feb 2014 14:35:23 -0500 (EST) Received: from web3 ([10.202.2.213]) by compute2.internal (MEProxy); Fri, 28 Feb 2014 14:35:24 -0500 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=message-id:from:to:mime-version :content-transfer-encoding:content-type:subject:date:in-reply-to :references; s=smtpout; bh=PcTZHrfN0hmd3YT/8bgafr9E0lg=; b=PfMwR V+18fR0+rQ250ZsadDsSUvN5HaAHcuhLuNtL51JIWdkKiX7l7FstYUnIMgUTogBW QShI1dghZGc1C3nKY6Al78rm2XMyBkwE37fVIiy5G55LvzriY3aeBRERjVV7zzlw 7cHeIOgtMFM8Z3lwJiqtmur1mT1eh1gR7u+zkg= Received: by web3.nyi.mail.srv.osa (Postfix, from userid 99) id 8318C11E1EA; Fri, 28 Feb 2014 14:35:23 -0500 (EST) Message-Id: <1393616123.28153.89089441.54713282@webmail.messagingengine.com> X-Sasl-Enc: aA22q/i4qOUOaDqLdgPoFoiEEdZwKPkN2PgEh50quZXs 1393616123 From: Mark Felder To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Type: text/plain X-Mailer: MessagingEngine.com Webmail Interface - ajax-4527a23f Subject: Re: ZFS and Wired memory, again Date: Fri, 28 Feb 2014 13:35:23 -0600 In-Reply-To: References: <530F6475.4090508@gmail.com> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 19:35:33 -0000 On Fri, Feb 28, 2014, at 13:11, Adam Vande More wrote: > On Fri, Feb 28, 2014 at 12:53 PM, Anton Sayetsky > wrote: > > > 2014-02-28 20:42 GMT+02:00 Larry Rosenman : > > > On 2014-02-28 12:31, Anton Sayetsky wrote: > > >> > > >> 2014-02-28 13:47 GMT+02:00 Matthias Gamsjager : > > >>>>> > > >>>>> > > >>>> > > >>>> +1 from me, FreeBSD 10, uma=0 > > >>>> > > >>>> 52 processes: 2 running, 49 sleeping, 1 zombie > > >>>> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.6% idle > > >>>> Mem: 31M Active, 16K Inact, 3352M Wired, 17M Cache, 48M Free > > >>>> ARC: 1838M Total, 110M MFU, 18M MRU, 548K Anon, 1876M Header, 75M > > Other > > >>>> Swap: 4096M Total, 126M Used, 3969M Free, 3% Inuse > > >>>> > > >>>> Machine is plain dead. Running database or squid or anything causes > > >>>> excessive swapping. This is the state when I disabled all payload, > > with > > >>>> everything started swap goes to 500M and machine is burning disks. > > >>>> > > >>> > > >>> I wonder do you use any zfs tuning? Like max arc size? Wonder if > > setting > > >>> that to a reasonable amount would help. > > >> > > >> Please read carefully my first message. No any tuning (configs > > >> posted), and problem is not that ZFS uses big amount of memory. I'm > > >> experiencing exactly one problem - Wired mem is significantly larger > > >> than ARC. > > >> E.g. 
if my ARC size is 2048M, I'm expecting that Wired will not > > >> consume more than ARC+~150M. > > >> _______________________________________________ > > >> freebsd-fs@freebsd.org mailing list > > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > > > > > Other pieces of the system used wired memory...... > > > > > > Have you investigated that as well? > > And again - this has detailed explanation in the first letter. In short: > > 1. I've booted the system without any memory hungry services (only > > basic like cron, powerd). Wired is 95M, ARC is 25M. > > 2. Then I started reading ZFS pool (tar cpf /dev/null > > /pool/mountpoint). ARC - 2048M, Wired - ~2800M. > > WTF? Who eats more than 700M of kernel memory? Do you really think > > that powerd or cron can do this? > > > Without question, cron could do it. > I can't see cron using kernel memory; that just doesn't make sense to me. Not even the periodic scripts that cron executes should be able to balloon kernel like that. I think I know what meant to infer though -- that some nonstandard cron script is doing something ugly. From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 19:43:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AB7E02AD; Fri, 28 Feb 2014 19:43:34 +0000 (UTC) Received: from mail-pa0-x230.google.com (mail-pa0-x230.google.com [IPv6:2607:f8b0:400e:c03::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 77DE01060; Fri, 28 Feb 2014 19:43:34 +0000 (UTC) Received: by mail-pa0-f48.google.com with SMTP id kx10so1174857pab.35 for ; Fri, 28 Feb 2014 11:43:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=oqznw/xViQg4dMUegOnYhCLZQoHOBTE1K8Kbxhhgvzs=; b=aLCfxnU8NBay5v65Q6uokZab2TS1RsW/8XsWnJ6jx2aaOqwJg2yyVZ/zUxRfb6cW9Z ovjohhhnc1oC8DZykIQJodRarlEibbdHMqdMS0cOiXEaZZwQA/pQwFDzk05tthCdQv/a v+PRh16ecuCauhGtFKHZwBMzB3FjdOFPGMVonb7l3wr/OYHcV6s2PahOqLElDYh4rdNW f+D0+XAoJMUGx0PObFvWuC4pc53MWojyqwbNOxNNsMQ6hDA5pRN/VinM3BKgcb1vAu4R xrkykvnOA6zgS7tCt2Y+PeveaFybfC/YfXqhnvOPU0vIj9JnF0zL/G8fBLOBGqoRc6Ix tHdA== MIME-Version: 1.0 X-Received: by 10.66.163.138 with SMTP id yi10mr5428527pab.95.1393616614073; Fri, 28 Feb 2014 11:43:34 -0800 (PST) Received: by 10.70.55.7 with HTTP; Fri, 28 Feb 2014 11:43:34 -0800 (PST) In-Reply-To: <1393616123.28153.89089441.54713282@webmail.messagingengine.com> References: <530F6475.4090508@gmail.com> <1393616123.28153.89089441.54713282@webmail.messagingengine.com> Date: Fri, 28 Feb 2014 13:43:34 -0600 Message-ID: Subject: Re: ZFS and Wired memory, again From: Adam Vande More To: Mark Felder Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 19:43:34 -0000 On Fri, Feb 28, 2014 at 1:35 PM, Mark Felder wrote: > > > > > > > Without question, cron could do it. 
> > > > I can't see cron using kernel memory; that just doesn't make sense to > me. Not even the periodic scripts that cron executes should be able to > balloon kernel like that. > > I think I know what meant to infer though -- that some nonstandard cron > script is doing something ugly. He's running 150 TB on 3 GB of mem. Periodic I think could consume that alone. -- Adam From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 19:48:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BF456524; Fri, 28 Feb 2014 19:48:23 +0000 (UTC) Received: from mail-ve0-x234.google.com (mail-ve0-x234.google.com [IPv6:2607:f8b0:400c:c01::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 66A5410B4; Fri, 28 Feb 2014 19:48:23 +0000 (UTC) Received: by mail-ve0-f180.google.com with SMTP id jz11so1219764veb.39 for ; Fri, 28 Feb 2014 11:48:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=2cWNiMfaUWrWBIxfsPcgoM4nwJBa1zjIJHqZj1HWTyA=; b=Qmr/wULNDfPktRYk22jIdev09d9kWdOhN885bc0IYIb5YXL5R0yFKMGkn9GsmuJD1c 3agQRLCHIQWFyR+k3KPAHdzHUaNhQ9d90bo6nVwzXWFxRaSsQruUYpOERjvDLEtrbNrp m2a+UWjVNITCIOK8lMSGvx9Ey7HUqkiZ+N3lGV9/gvWyjTxaz3ZWYvS2KzfJRrA/5H/M B/uNHYlX4nbRzck0/1YnFKdOE8iyAhOMomp/SfqAeulq2LfkhOuP9TfgD7Lp3MKEKwWv M724P/pt9G+e4JqMJk6FOmgBWpbf7Nt1OjOQZ/+3bANP+6IItw/0GeFMYgICoz66rRO8 8smQ== X-Received: by 10.220.98.204 with SMTP id r12mr93857vcn.48.1393616902467; Fri, 28 Feb 2014 11:48:22 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Fri, 28 Feb 2014 11:48:02 -0800 (PST) In-Reply-To: References: <530F6475.4090508@gmail.com> <1393616123.28153.89089441.54713282@webmail.messagingengine.com> From: Anton Sayetsky Date: Fri, 28 Feb 2014 21:48:02 +0200 Message-ID: Subject: Re: ZFS and Wired memory, again To: Adam Vande More Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 19:48:23 -0000 2014-02-28 21:43 GMT+02:00 Adam Vande More : > On Fri, Feb 28, 2014 at 1:35 PM, Mark Felder wrote: > >> >> > >> > >> > Without question, cron could do it. >> > >> >> I can't see cron using kernel memory; that just doesn't make sense to >> me. Not even the periodic scripts that cron executes should be able to >> balloon kernel like that. >> >> I think I know what meant to infer though -- that some nonstandard cron >> script is doing something ugly. > > > He's running 150 TB on 3 GB of mem. Periodic I think could consume that > alone. And again - read _carefully_. System with 3G RAM is only test system with single disk. 
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 19:58:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 22268A2F for ; Fri, 28 Feb 2014 19:58:13 +0000 (UTC) Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com [66.111.4.27]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E0E0911A2 for ; Fri, 28 Feb 2014 19:58:12 +0000 (UTC) Received: from compute4.internal (compute4.nyi.mail.srv.osa [10.202.2.44]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 862C021222; Fri, 28 Feb 2014 14:58:09 -0500 (EST) Received: from web3 ([10.202.2.213]) by compute4.internal (MEProxy); Fri, 28 Feb 2014 14:58:09 -0500 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=message-id:from:to:cc:mime-version :content-transfer-encoding:content-type:in-reply-to:references :subject:date; s=smtpout; bh=cXWe5LugXI6XBBVKwohn7jN3QSg=; b=pki NRRfffrzn1U5gnI6nW/u561dqVRccR/ZVCDuo/6yjr8k6EuPsp4jP68fHAjqbMBs Xz2C35CzLZ1KrgW6dqx0RQPnEOqtYGZkXk3WNlGq8ZXzmPHMqIt9BmpVy+NrFWBj i6twWVb7W1jM5jKpaQ84vspB7/uc9DV2ox01aAFg= Received: by web3.nyi.mail.srv.osa (Postfix, from userid 99) id D998C112A0D; Fri, 28 Feb 2014 14:58:08 -0500 (EST) Message-Id: <1393617488.2929.89095301.157E5E1B@webmail.messagingengine.com> X-Sasl-Enc: DoPuTycToYhF48tOSvegiDQ1Q+otz9AQb+NLhTvwPH98 1393617488 From: Mark Felder To: Adam Vande More MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Type: text/plain X-Mailer: MessagingEngine.com Webmail Interface - ajax-4527a23f In-Reply-To: References: <530F6475.4090508@gmail.com> <1393616123.28153.89089441.54713282@webmail.messagingengine.com> Subject: Re: ZFS and Wired memory, again Date: Fri, 28 Feb 2014 13:58:08 -0600 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 19:58:13 -0000 On Fri, Feb 28, 2014, at 13:43, Adam Vande More wrote: > On Fri, Feb 28, 2014 at 1:35 PM, Mark Felder wrote: > > > > > > > > > > > > Without question, cron could do it. > > > > > > > I can't see cron using kernel memory; that just doesn't make sense to > > me. Not even the periodic scripts that cron executes should be able to > > balloon kernel like that. > > > > I think I know what meant to infer though -- that some nonstandard cron > > script is doing something ugly. > > > He's running 150 TB on 3 GB of mem. Periodic I think could consume that > alone. > Which periodic script are you referring to? The daily security check? I suppose I could see resource usage increase after checking for new setuid bits... assuming he had a lot of files to crawl through. But I don't actually know if that would do much to wired memory. 
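If the file-crawl theory is worth testing, vnode pressure is easy to check right after a periodic run, since each live vnode pins wired kernel memory. A sketch using standard sysctls (the zone names are as printed by vmstat -z):

  # live vnodes versus the configured cap
  sysctl vfs.numvnodes kern.maxvnodes
  # the VNODE and NAMEI zones grow during large tree walks
  vmstat -z | egrep -i 'ITEM|vnode|namei'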
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 20:05:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 36142C67 for ; Fri, 28 Feb 2014 20:05:03 +0000 (UTC) Received: from mail-ve0-x235.google.com (mail-ve0-x235.google.com [IPv6:2607:f8b0:400c:c01::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E706A1258 for ; Fri, 28 Feb 2014 20:05:02 +0000 (UTC) Received: by mail-ve0-f181.google.com with SMTP id jw12so1264488veb.12 for ; Fri, 28 Feb 2014 12:05:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :content-type; bh=ZpsIMniGv3rKvdOL8QRnGN5OIJkDJPT58LJOGR05Rc8=; b=OhFITWcFM4jf4EVJCA1J8MW+dNgS5QT8vt39koxVIVL48a3U5W+fAaRhtEwUocQydz PHAkTNjET+TxGIZ8SQNMCmeVHrim/+vb0bnbCWk/8ECHBnDxmCux8rzm+B5l4TCZU7VA 3D2Ba9qu8W3tNajRAmvt0y0pkfslNAkkARn/6pkh/q9leZH5SkWe9TXu2mUBK72LjSyS aZ20/rMVwuoaV+1JRaEaV7aafXQhD/LOr3VLNb9DT3XuUPt/5ansiruRzzFCUIpFjBtZ VgDOe2BbC03UMMeMAX8NK0SPbAKZjMNYEJycYWkJlB9EZ04B11/5kefUlmdIMP7CfYES YqBA== X-Received: by 10.52.171.68 with SMTP id as4mr14620807vdc.0.1393617902096; Fri, 28 Feb 2014 12:05:02 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Fri, 28 Feb 2014 12:04:41 -0800 (PST) In-Reply-To: References: From: Anton Sayetsky Date: Fri, 28 Feb 2014 22:04:41 +0200 Message-ID: Subject: Fwd: ZFS and Wired memory, again To: freebsd-fs Content-Type: multipart/mixed; boundary=047d7b6dbd9a01f7b804f37cf494 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 20:05:03 -0000 --047d7b6dbd9a01f7b804f37cf494 Content-Type: text/plain; charset=ISO-8859-1 2014-02-28 21:58 GMT+02:00 Mark Felder : > > > On Fri, Feb 28, 2014, at 13:43, Adam Vande More wrote: >> On Fri, Feb 28, 2014 at 1:35 PM, Mark Felder wrote: >> >> > >> > > >> > > >> > > Without question, cron could do it. >> > > >> > >> > I can't see cron using kernel memory; that just doesn't make sense to >> > me. Not even the periodic scripts that cron executes should be able to >> > balloon kernel like that. >> > >> > I think I know what meant to infer though -- that some nonstandard cron >> > script is doing something ugly. >> >> >> He's running 150 TB on 3 GB of mem. Periodic I think could consume that >> alone. >> > > Which periodic script are you referring to? The daily security check? I > suppose I could see resource usage increase after checking for new > setuid bits... assuming he had a lot of files to crawl through. But I > don't actually know if that would do much to wired memory. Ok, I'll repost my first message. I repeat: this is only a _test_ system, with a _single disk_, with _only_ the system, ports & src on the ZFS pool. No services are running except those which are in the base system. ---------- Forwarded message ---------- From: Anton Sayetsky Date: 2013-11-22 21:53 GMT+02:00 Subject: ZFS and Wired memory, again To: freebsd-fs@freebsd.org Hello, I'm planning to deploy a ~150 TiB ZFS pool and when playing with ZFS noticed that amount of wired memory is MUCH bigger than ARC size (in absence of other hungry memory consumers, of course). 
I'm afraid that this strange behavior may become even worse on a machine with a big pool and some hundreds of gibibytes of RAM. So let me explain what happened. Immediately after booting the system, top says the following:
=====
Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
=====
Ok, wired mem - arc = 92 MiB. Then I started to read the pool (tar cpf /dev/null /). Memory usage when ARC size is ~1 GiB:
=====
Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
=====
1410-1114=296 MiB. Memory usage when ARC size reaches its maximum of 2 GiB:
=====
Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
=====
2523-2067=456 MiB. Memory usage a few minutes later:
=====
Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
=====
2721-2002=719 MiB. So why has wired RAM on a machine with only a minimal set of services grown from 92 to 719 MiB? Sometimes I can even see about a gig! I'm using 9.2-RELEASE-p1 amd64. The test machine has a T5450 C2D CPU and 4 G RAM (the actual available amount is 3 G). The ZFS pool is configured on a GPT partition of a single 1 TB HDD. Disabling/enabling prefetch doesn't help. Limiting ARC to 1 gig doesn't help either. When reading the pool, evict skips can increment very fast, and sometimes arc metadata exceeds its limit (2x-5x). I've attached logs with the system configuration and outputs from top, ps, zfs-stats and vmstat. conf.log = system configuration, also uploaded to http://pastebin.com/NYBcJPeT top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after booting the system, http://pastebin.com/mudmEyG5 top_ps_zfs-stats_vmstat_1g-arc = after the ARC has grown to 1 gig, http://pastebin.com/4AC8dn5C top_ps_zfs-stats_vmstat_fullmem = when the ARC reached its limit of 2 gigs, http://pastebin.com/bx7svEP0 top_ps_zfs-stats_vmstat_fullmem_2 = a few minutes later, http://pastebin.com/qYWFaNeA What should I do next?
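For reference, the same wired-minus-ARC arithmetic can be pulled from sysctl counters instead of eyeballing top(1); these OIDs are the stock FreeBSD 9.x names, and the arcstats node only exists once zfs.ko is loaded:

    wired_pages=$(sysctl -n vm.stats.vm.v_wire_count)
    pagesize=$(sysctl -n hw.pagesize)
    arc_bytes=$(sysctl -n kstat.zfs.misc.arcstats.size)
    # v_wire_count is in pages, arcstats.size in bytes
    echo "wired - ARC = $(( wired_pages * pagesize - arc_bytes )) bytes"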
--047d7b6dbd9a01f7b804f37cf494 Content-Type: application/octet-stream; name="logs.txz" Content-Disposition: attachment; filename="logs.txz" Content-Transfer-Encoding: base64 X-Attachment-Id: f_hobtsfr80 [base64 body of the logs.txz attachment omitted; the same data is at the pastebin links above] --047d7b6dbd9a01f7b804f37cf494-- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 20:08:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EE090DF9; Fri, 28 Feb 2014 20:07:59 +0000 (UTC) Received: from mail-pa0-x231.google.com (mail-pa0-x231.google.com [IPv6:2607:f8b0:400e:c03::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B83571283; Fri, 28 Feb 2014 20:07:59 +0000 (UTC) Received: by mail-pa0-f49.google.com with SMTP id hz1so1199479pad.36 for ; Fri, 28 Feb 2014 12:07:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=45KbPN0t9DIf/qwUxfR2mzCqz/rEFHLi+4I8a5/FC2E=; b=T5XUsoA9L6NxTADfpRq4jKOdKdR29hOnDqwGVd35pGo9n8Gac3349/6yw2A+DUK0fM zxi8vo0hDcIrcy0zzLCJ0+UPSfBu4gaSD8ZvGOkPMZIc0d+1WnlwjuayDR3A7d8XtYbD qDgfIOEAZFnJb0K9PKERAQ3VU61jTZMWOj+cRY7oGw+9d1r/UweybaYIrjyojztCaov5 bUQwqsw6h/6t+4+Z/u0eEJ4utP8zawbmftJu++lJrNgrGkkrtuc5MHpWKO8XNNxeA3y5 9TRpdG3aBatM5ylgMANNE9P3Bk7iRAIUvbzDczbF/Vayrc5TRpn+450yP2AxRykbMq09 nxpQ== MIME-Version: 1.0 X-Received: by 10.68.33.106 with SMTP id q10mr5604283pbi.132.1393618079322; Fri, 28 Feb 2014 12:07:59 -0800 (PST) Received: by 10.70.55.7 with HTTP; Fri, 28 Feb 2014 12:07:59 -0800 (PST) In-Reply-To: <1393617488.2929.89095301.157E5E1B@webmail.messagingengine.com> References: <530F6475.4090508@gmail.com> <1393616123.28153.89089441.54713282@webmail.messagingengine.com> <1393617488.2929.89095301.157E5E1B@webmail.messagingengine.com> Date: Fri, 28 Feb 2014 14:07:59 -0600 Message-ID: Subject: Re: ZFS and Wired memory, again From: Adam Vande More To: Mark Felder Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 20:08:00 -0000 On Fri, Feb 28, 2014 at 1:58 PM, Mark Felder wrote: > On Fri, Feb 28, 2014, at 13:43, Adam Vande More wrote: > > > He's running 150 TB on 3 GB of mem. Periodic I think could consume > that > > alone. > > > > Which periodic script are you referring to? The daily security check? I > suppose I could see resource usage increase after checking for new > setuid bits... assuming he had a lot of files to crawl through. But I > don't actually know if that would do much to wired memory. > Yes, that, and perhaps stuff from weekly, as well as the "ugly" you implied earlier. Regardless, the amount of mem for the "test" system is too low for the dataset size. The swapping is the issue, and if he wants to eliminate that he should increase the memory or limit what is available to the kernel. Limiting the kernel also has its side effects.
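For reference, the usual knob for "limit what is available to the kernel" on the ZFS side is the ARC cap, set as a loader tunable; the value below is purely illustrative, not a sizing recommendation for this machine:

    # /boot/loader.conf
    vfs.zfs.arc_max="1G"   # cap the ARC at 1 GiB; takes effect on next boot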
-- Adam From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 20:30:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0DDFC7A1 for ; Fri, 28 Feb 2014 20:30:18 +0000 (UTC) Received: from mario.brtsvcs.net (mario.brtsvcs.net [IPv6:2607:fc50:0:a400::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id D0C36149F for ; Fri, 28 Feb 2014 20:30:17 +0000 (UTC) Received: from chombo.houseloki.net (c-76-115-19-22.hsd1.or.comcast.net [76.115.19.22]) by mario.brtsvcs.net (Postfix) with ESMTPSA id D53022C1622; Fri, 28 Feb 2014 12:30:14 -0800 (PST) Received: from [IPv6:2601:7:880:bd0:957a:b9e2:b4c0:512a] (unknown [IPv6:2601:7:880:bd0:957a:b9e2:b4c0:512a]) by chombo.houseloki.net (Postfix) with ESMTPSA id C016AEDB; Fri, 28 Feb 2014 12:30:12 -0800 (PST) Message-ID: <5310F1D3.9020908@bluerosetech.com> Date: Fri, 28 Feb 2014 12:30:11 -0800 From: Darren Pilgrim User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: Anton Sayetsky Subject: Re: ZFS and Wired memory, again References: <530F6475.4090508@gmail.com> <1393616123.28153.89089441.54713282@webmail.messagingengine.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: freebsd-fs List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 20:30:18 -0000 On 2/28/2014 11:48 AM, Anton Sayetsky wrote: > And again - read _carefully_. System with 3G RAM is only test system > with single disk. Sorry for Adam and Mark not reading and completely missing the point. You can use the -m and -z flags to vmstat to get a listing of the various things using memory. A comparison between the "just booted" listing and one after wired memory has grown to hundreds of MB more than ARC may help indicate what's using memory. FWIW, on a production mailbox server with 32 GiB of memory and 8 TB of mirrored disk: Mem: 30M Active, 265M Inact, 22G Wired, 2032K Cache, 9366M Free ARC: 20G Total, 5081M MFU, 15G MRU, 1554K Anon, 260M Header, 577M Other IIRC, wired memory is just kernel pages, and there is quite a bit of other stuff using kernel memory in modern FreeBSD. A few GB of kernel memory plus ARC is normal on a modern system with lots of memory.
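A sketch of the comparison Darren describes, using only base-system tools (file names here are arbitrary):

    # shortly after boot
    vmstat -m > /var/tmp/malloc.boot; vmstat -z > /var/tmp/zone.boot
    # later, once wired memory has grown well past the ARC
    vmstat -m > /var/tmp/malloc.now;  vmstat -z > /var/tmp/zone.now
    diff -u /var/tmp/zone.boot /var/tmp/zone.now | less

vmstat -m reports the kernel malloc(9) types and -z the uma(9) zones; whichever counters grew by hundreds of megabytes point at the wired-memory consumer.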
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 20:35:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 90E85B5F for ; Fri, 28 Feb 2014 20:35:45 +0000 (UTC) Received: from mail-pb0-x229.google.com (mail-pb0-x229.google.com [IPv6:2607:f8b0:400e:c01::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6057B155B for ; Fri, 28 Feb 2014 20:35:45 +0000 (UTC) Received: by mail-pb0-f41.google.com with SMTP id jt11so1233125pbb.14 for ; Fri, 28 Feb 2014 12:35:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=6c58FDTYd2jjwNK0oasuj7Y1jnp3ykNgGyzvk3WNqJo=; b=EgQc3dV90xzCDKmmNeOflz57DCgwuZ1xRAJPusuebKWN+YjK8QtUunv4eTRiPlIB9Y fHVn65tdIe6O4+sZTE0+QWfX2ba7MRpfLmQGv9bQ9U9kyFyq4hUDCJ2iXbyEEBRbqYYx qmovM+SqxS4BIDf/6MyYG4wXH/iTlgGNRQw4wo5Myu27LxX21MADvEbb08irJwMeDLhM LdDE+em9U/Ip9LCdo4Q0VHEt491Iqs3uJfoZ36cD2xnKwRD9XIyCqqaBmqouGzbXJs2R G0ZaCdldfV3SJr3gUfZ375v7RfqE2R+UrUhPBRTCNRRaBi2PZbmqkkmXU6gZyvjULAjq OZhw== MIME-Version: 1.0 X-Received: by 10.66.49.74 with SMTP id s10mr5889424pan.0.1393619744976; Fri, 28 Feb 2014 12:35:44 -0800 (PST) Received: by 10.70.55.7 with HTTP; Fri, 28 Feb 2014 12:35:44 -0800 (PST) In-Reply-To: <5310F1D3.9020908@bluerosetech.com> References: <530F6475.4090508@gmail.com> <1393616123.28153.89089441.54713282@webmail.messagingengine.com> <5310F1D3.9020908@bluerosetech.com> Date: Fri, 28 Feb 2014 14:35:44 -0600 Message-ID: Subject: Re: ZFS and Wired memory, again From: Adam Vande More To: freebsd-fs Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 20:35:45 -0000 On Fri, Feb 28, 2014 at 2:30 PM, Darren Pilgrim < list_freebsd@bluerosetech.com> wrote: > On 2/28/2014 11:48 AM, Anton Sayetsky wrote: > >> And again - read _carefully_. System with 3G RAM is only test system >> with single disk. >> > > Sorry for Adam and Mark not reading and completely missing the point. Please don't ever apologize or assume anything about me or my behavior. 
-- Adam From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 23:00:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 686CBCC1 for ; Fri, 28 Feb 2014 23:00:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 364471250 for ; Fri, 28 Feb 2014 23:00:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id s1SN01le021137 for ; Fri, 28 Feb 2014 23:00:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s1SN01dY021136; Fri, 28 Feb 2014 23:00:01 GMT (envelope-from gnats) Date: Fri, 28 Feb 2014 23:00:01 GMT Message-Id: <201402282300.s1SN01dY021136@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Steve Modica Subject: Re: kern/185858: [zfs] zvol clone can't see new device X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Steve Modica List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 23:00:01 -0000 The following reply was made to PR kern/185858; it has been noted by GNATS. From: Steve Modica To: bug-followup@FreeBSD.org, biatche@gmail.com Cc: Subject: Re: kern/185858: [zfs] zvol clone can't see new device Date: Fri, 28 Feb 2014 16:47:43 -0600 It looks like this might relate back to this checkin: https://github.com/joyent/illumos-joyent/commit/3b2aab18808792cbd248a12f1edf139b89833c13 -- Steve Modica CTO - Small Tree Communications www.small-tree.com From owner-freebsd-fs@FreeBSD.ORG Fri Feb 28 23:50:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4B3811A9 for ; Fri, 28 Feb 2014 23:50:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 37E0516AB for ; Fri, 28 Feb 2014 23:50:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id s1SNo1Tv036872 for ; Fri, 28 Feb 2014 23:50:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s1SNo192036871; Fri, 28 Feb 2014 23:50:01 GMT (envelope-from gnats) Date: Fri, 28 Feb 2014 23:50:01 GMT Message-Id: <201402282350.s1SNo192036871@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Matthew Ahrens Subject: Re: kern/185858: [zfs] zvol clone can't see new device X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Matthew Ahrens List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Feb 2014 23:50:02 -0000 The following reply was made to PR kern/185858; it has been noted by GNATS.
From: Matthew Ahrens To: bug-followup@freebsd.org Cc: Subject: Re: kern/185858: [zfs] zvol clone can't see new device Date: Fri, 28 Feb 2014 15:41:07 -0800 --bcaec520f69bc84d2a04f37ff83d Content-Type: text/plain; charset=ISO-8859-1 I looked for differences between the illumos and FreeBSD code and found that zfs_ioc_create() has this (lines 3310-3313):

#ifdef __FreeBSD__
	if (error == 0 && type == DMU_OST_ZVOL)
		zvol_create_minors(fsname);
#endif

It seems like similar code would be needed in zfs_ioc_clone(), but it isn't there. The "type" (ZVOL vs ZPL filesystem) isn't as immediately available in zfs_ioc_clone(), so it might be a little trickier to add the code to zfs_ioc_clone(). --matt
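A rough sketch of the change Matt suggests, with assumed calls rather than the actual committed fix: since zfs_ioc_clone() is not handed the objset type, one way would be to look it up from the freshly created clone before creating the minor nodes:

#ifdef __FreeBSD__
	/* hypothetical mirror of the zfs_ioc_create() hunk above */
	if (error == 0) {
		objset_t *os;

		if (dmu_objset_hold(fsname, FTAG, &os) == 0) {
			if (dmu_objset_type(os) == DMU_OST_ZVOL)
				zvol_create_minors(fsname);
			dmu_objset_rele(os, FTAG);
		}
	}
#endif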
--bcaec520f69bc84d2a04f37ff83d-- From owner-freebsd-fs@FreeBSD.ORG Sun Mar 2 15:09:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 91D3E7C8 for ; Sun, 2 Mar 2014 15:09:48 +0000 (UTC) Received: from emea01-am1-obe.outbound.protection.outlook.com (mail-am1lp0017.outbound.protection.outlook.com [213.199.154.17]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E9DEA1919 for ; Sun, 2 Mar 2014 15:09:47 +0000 (UTC) Received: from AM3PR02MB067.eurprd02.prod.outlook.com (10.242.246.22) by AM3PR02MB004.eurprd02.prod.outlook.com (10.242.242.26) with Microsoft SMTP Server (TLS) id 15.0.888.9; Sun, 2 Mar 2014 15:09:44 +0000 Received: from AM3PR02MB067.eurprd02.prod.outlook.com ([169.254.5.141]) by AM3PR02MB067.eurprd02.prod.outlook.com ([169.254.5.141]) with mapi id 15.00.0888.003; Sun, 2 Mar 2014 15:09:44 +0000 From: Mikael Niemelä To: "freebsd-fs@freebsd.org" Subject: ZFS in 10-stable panics in i386 system Thread-Topic: ZFS in 10-stable panics in i386 system Thread-Index: AQHPNii7Ckmr0Q0PSEGje3JZS9qDKw== Date: Sun, 2 Mar 2014 15:09:43 +0000 Message-ID: <6131750ecf6e427a80d1e5b4bbb5d3c8@AM3PR02MB067.eurprd02.prod.outlook.com> Accept-Language: fi-FI, en-US Content-Language: fi-FI x-originating-ip: [94.237.68.88] received-spf: None (: student.tut.fi does not designate permitted sender hosts) MIME-Version: 1.0 X-OriginatorOrg: student.tut.fi Content-Type: text/plain; charset="utf-8" X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 02 Mar 2014 15:09:48 -0000

Hi,

I tried to update FreeBSD 10-release on my NAS box (i386) to 10-stable. After this, ZFS caused kernel panics, and only reverting to 10-release solved the issue, so it seems like some changes broke ZFS on i386. Here's the core.txt.1 file which was generated after a kernel panic: http://koti.mbnet.fi/~jokunen_/core.txt

Also, there's more information about it in this thread on the FreeBSD forums: https://forums.freebsd.org/viewtopic.php?f=48&t=45187

-Mikael
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 3 09:32:48 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CC07EEAD; Mon, 3 Mar 2014 09:32:48 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id EAA8CEE8; Mon, 3 Mar 2014 09:32:47 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA03512; Mon, 03 Mar 2014 11:32:46 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WKPEs-000NaS-1f; Mon, 03 Mar 2014 11:32:46 +0200 Message-ID: <53144C06.40207@FreeBSD.org> Date: Mon, 03 Mar 2014 11:31:50 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: Re: l2arc_feed_thread cpu utilization References: <52B2D8D6.8090306@FreeBSD.org> <52FE0378.7070608@FreeBSD.org> In-Reply-To: <52FE0378.7070608@FreeBSD.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=x-viet-vps Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 03 Mar 2014 09:32:48 -0000 on 14/02/2014 13:52 Andriy Gapon said the following: > on 19/12/2013 13:30 Andriy Gapon said the following: >> >> This is just a heads up, no patch yet. >> >> l2arc_feed_thread periodically wakes up, scans a certain amount of ARC buffers >> and writes eligible buffers to a cache device. >> The number of scanned buffers is limited by a threshold on the amount of data in the >> buffers seen. The threshold is applied on a per buffer list basis. In upstream >> there are 4 relevant lists: (data, metadata) X (MFU, MRU). In FreeBSD each of >> the lists was subdivided into 16 lists. This was done to reduce contention on >> the locks that protect the lists. But as a side effect l2arc_feed_thread can >> scan 16 times more data (~ buffers). >> >> So, if you have a rather large ARC and L2ARC and your buffers tend to be >> sufficiently small, then you could observe l2arc_feed_thread burning a >> noticeable amount of CPU. On some of our systems I observed it using up to 40% >> of a single core. Scaling back the threshold by a factor of 16 makes CPU >> utilization go down by the same factor. >> >> I plan to commit this change to FreeBSD ZFS code. >> Any comments are welcome. > > Here is what I have in mind: > https://github.com/avg-I/freebsd/compare/wip;hc;l2arc_feed_thread_scan_rate > > The calculations in the macro look somewhat ugly, but they should be correct :-) Looks like the patch did more than I wanted. Not only did it limit how much data is scanned per list, it also limited how much data was written in total. So, this new patch should be better: https://github.com/avg-I/freebsd/compare/master...review;l2arc-feed-thread-scan-size.diff -- Andriy Gapon
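To illustrate the idea (identifier names are assumptions based on the arc.c of that era; the actual diff is at the GitHub URL above): upstream derives the per-pass scan allowance per state list from the write target, and FreeBSD runs that same budget once for each of its 16 sublists, so the fix amounts to something like:

	/*
	 * Sketch: give each of the 16 sublists 1/16th of the scan
	 * budget, so a full pass over one state looks at roughly as
	 * much data as upstream does with its single list.
	 */
	headroom = (target_sz * l2arc_headroom) / 16;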
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 3 10:30:09 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7AC6FE3 for ; Mon, 3 Mar 2014 10:30:09 +0000 (UTC) Received: from mail3.icritical.com (mail3.icritical.com [212.57.248.143]) by mx1.freebsd.org (Postfix) with SMTP id F2E8A38D for ; Mon, 3 Mar 2014 10:30:07 +0000 (UTC) Received: (qmail 7280 invoked from network); 3 Mar 2014 10:23:23 -0000 Received: from localhost (127.0.0.1) by mail3.icritical.com with SMTP; 3 Mar 2014 10:23:23 -0000 Received: (qmail 7272 invoked by uid 599); 3 Mar 2014 10:23:20 -0000 Received: from unknown (HELO PDC002.icritical.int) (195.62.218.2) by mail3.icritical.com (qpsmtpd/0.28) with ESMTP; Mon, 03 Mar 2014 10:23:20 +0000 Date: Mon, 3 Mar 2014 10:22:48 +0000 From: Andy D'Arcy Jewell To: Subject: Is it feasible to run HAST with only ONE node long-term Message-ID: <20140303102248.6686617b@hyperion> Organization: iCritical X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset="US-ASCII" Content-Transfer-Encoding: 7bit X-Originating-IP: [10.78.99.2] X-TLS-Incoming: YES X-Virus-Scanned: by iCritical at mail3.icritical.com X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 03 Mar 2014 10:30:09 -0000 Hi, Can I run a HAST MASTER without a SLAVE indefinitely without problems? We have a beta system I set up on FreeBSD 9-RELEASE with HAST/uCARP for redundancy. For reasons beyond my control, I now need to remove the standby to recover the rack space it uses, but we need to continue to run the system for at least a while, without redundancy. It's far from ideal, I know. There is close to zero prospect of reviving the redundancy later on. Obviously for short periods of time, running without a slave is fine (kind of "in the spec" really). My main worry is that HAST performance might suffer as the delta grows between the master and its long-lost slave; does the master consume extra resources (memory, disk) "saving up" the replication stream objects, or does it just rely on a time/version stamp so that when (in normal operation) the slave gets back in contact with the master, it knows to do a full sync? I'd rather not have to remove the HAST layer, as this is obviously invasive and time-consuming. Regards, -Andy D'Arcy Jewell
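For what it's worth, a minimal way to watch the delta Andy is asking about (the resource name "storage" is an assumption, and the exact output fields differ between releases):

    hastctl status storage   # the "dirty" figure counts bytes flagged in the
                             # on-disk activemap, not a queued change log

HAST marks out-of-sync extents in a fixed-size activemap rather than buffering a replication stream, so the dirty counter stays bounded by the provider size regardless of how long the secondary is absent.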
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 3 11:06:43 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9AAC3DE0 for ; Mon, 3 Mar 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7D03A938 for ; Mon, 3 Mar 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s23B6hjb008471 for ; Mon, 3 Mar 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s23B6hi6008469 for freebsd-fs@FreeBSD.org; Mon, 3 Mar 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 3 Mar 2014 11:06:43 GMT Message-Id: <201403031106.s23B6hi6008469@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 03 Mar 2014 11:06:43 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186574 fs [zfs] zpool history hangs (infinite loop) o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/185858 fs [zfs] zvol clone can't see new device o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. 
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o 
kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a 
non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs 
[zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. 
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 339 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Mar 3 19:11:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20CEBDB7 for ; Mon, 3 Mar 2014 19:11:34 +0000 (UTC) Received: from mail-lb0-x229.google.com (mail-lb0-x229.google.com [IPv6:2a00:1450:4010:c04::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 9C3F8FF8 for ; Mon, 3 Mar 2014 19:11:33 +0000 (UTC) Received: by mail-lb0-f169.google.com with SMTP id l4so4074650lbv.14 for ; Mon, 03 Mar 2014 11:11:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=pKw7Z8fOBnrEFXnAuO88NSVTtz2iXPTYL1NgnfcwZVs=; b=lIn2G6qf26B/4UtL0ORTOFwiJj1qts7bRXu73ThOxN6I5z99GL2n5zCppttG9X29ta l/5RGfO9U7O+MHw5rYkVGneJPNXNRs7wEFrQ8XcqmyLXDv/d9OpicFqabbuLP65vUdt5 JeQCmnYza1x2Q78RTh1UBJgAbqrdVyG+Z/hQMZjei0ckdW/6ajlpQxRZfbm/GBCTZAYv J0rwdM3uKHJeL5PXMBXZwW8PeXo5tIkg/Mc/KU88dbdSca9tBGTtukWnZMyF5vXQpIaE 627M6xo67Fa2D0Jt+4riS8DbIfe3il+Bq8jYZfpSligiXcz5zaB2HzVrx6ZUHmP1DlLj gV9Q== X-Received: by 10.112.140.202 with SMTP id ri10mr23119711lbb.9.1393873891095; Mon, 03 Mar 2014 11:11:31 -0800 (PST) Received: from localhost ([178.150.115.244]) by mx.google.com with ESMTPSA id sx1sm14255456lac.1.2014.03.03.11.11.29 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 03 Mar 2014 11:11:30 -0800 (PST) Date: Mon, 3 Mar 2014 21:11:28 +0200 From: Mikolaj Golub To: Andy D'Arcy Jewell Subject: Re: Is it feasible to run HAST with only ONE node long-term Message-ID: <20140303191127.GA9602@gmail.com> References: <20140303102248.6686617b@hyperion> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20140303102248.6686617b@hyperion> User-Agent: Mutt/1.5.22 (2013-10-16) Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 
Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 03 Mar 2014 19:11:34 -0000 On Mon, Mar 03, 2014 at 10:22:48AM +0000, Andy D'Arcy Jewell wrote: > My main worry is that HAST performance might suffer as the delta grows > between the master and its long-lost slave; does the master > consume extra resources (memory, disk) "saving up" the replication > stream objects, or does it just rely on a time/version stamp so when > (in normal operation) the slave gets back in contact with the master, > it knows to do a full sync? No performance degradation is expected. hastd(8) maintains a map of dirty extents. The map size is static; more blocks are simply marked dirty while the nodes are disconnected. The only effect is a longer synchronization time after the connection is restored. You might consider setting the remote address to "none" (see hast.conf(5)) so that hastd(8) does not even try to connect to the secondary. -- Mikolaj Golub From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 07:01:40 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20612E30 for ; Tue, 4 Mar 2014 07:01:40 +0000 (UTC) Received: from mail-wg0-f43.google.com (mail-wg0-f43.google.com [74.125.82.43]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AFC7297F for ; Tue, 4 Mar 2014 07:01:39 +0000 (UTC) Received: by mail-wg0-f43.google.com with SMTP id x13so3327137wgg.14 for ; Mon, 03 Mar 2014 23:01:31 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to :content-type; bh=oboQhlS6sU4428aaWUe8QcznstNm/pPRj2kF98uEL4M=; b=htWTiPE4UDW40l3gJ8YKoqJFBaiA8s7uaKDqJlPzdD9wyQbEQlMcJ9h8Ijl4yIiCWA lMZY2LZR4KuBaG8YVlczEEThhS3CsQNrAalFQ77H+yv1013hNwnGaIJFNfCXRzHixmA6 oQ1uqzGSwJfUeBgZpTIVMd3f2YE7T0uGfTCYvPT/+rGUNgTe33pRG/5Xi+cYvHPdG5h8 KirSVEy2vUB979GCACd/be9uJJZrRMd0ozopa+AUGxlbWCmqAiKW6CyyTNfczS6MIO5R qg+LeJ3ICV+YZEmwMtEXdrFU/FbO9Pc9bTkLVeRKyi8czVgRecH9oL6fV+pGBXEAji2P zjhA== X-Gm-Message-State: ALoCoQmaVHM0bBavJ1AkWWY+SpzPLimmsWSLkgyPJwJYvN0Aqc7E/k1Sacc/hx4uF4AZrz0vqRhj MIME-Version: 1.0 X-Received: by 10.194.234.106 with SMTP id ud10mr25009968wjc.0.1393916067652; Mon, 03 Mar 2014 22:54:27 -0800 (PST) Received: by 10.227.92.198 with HTTP; Mon, 3 Mar 2014 22:54:27 -0800 (PST) Date: Tue, 4 Mar 2014 07:54:27 +0100 Message-ID: Subject: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE?
From: Olav Gjerde To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 07:01:40 -0000 -- Olav Grønås Gjerde From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 07:12:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5879297 for ; Tue, 4 Mar 2014 07:12:23 +0000 (UTC) Received: from smtp.infracaninophile.co.uk (smtp6.infracaninophile.co.uk [IPv6:2001:8b0:151:1:3cd3:cd67:fafa:3d78]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id BE83CAA1 for ; Tue, 4 Mar 2014 07:12:22 +0000 (UTC) Received: from seedling.black-earth.co.uk (seedling.black-earth.co.uk [81.2.117.99]) (authenticated bits=0) by smtp.infracaninophile.co.uk (8.14.8/8.14.8) with ESMTP id s247CBCb052689 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO) for ; Tue, 4 Mar 2014 07:12:17 GMT (envelope-from matthew@FreeBSD.org) DKIM-Filter: OpenDKIM Filter v2.8.3 smtp.infracaninophile.co.uk s247CBCb052689 Authentication-Results: smtp.infracaninophile.co.uk/s247CBCb052689; dkim=none reason="no signature"; dkim-adsp=none Message-ID: <53157CC2.8080107@FreeBSD.org> Date: Tue, 04 Mar 2014 07:12:02 +0000 From: Matthew Seaman User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? References: In-Reply-To: X-Enigmail-Version: 1.6 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="e97kNhdah4d5woSM2FejWKSkWIQj7QWLS" X-Virus-Scanned: clamav-milter 0.98.1 at lucid-nonsense.infracaninophile.co.uk X-Virus-Status: Clean X-Spam-Status: No, score=-3.0 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00 autolearn=ham version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on lucid-nonsense.infracaninophile.co.uk X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 07:12:23 -0000 On 04/03/2014 06:54, Olav Gjerde wrote: ... but the question in the subject was: > Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? Yes. LZ4 is available in 9.2-RELEASE, 10.0-RELEASE and their respective stable branches. Matthew -- Dr Matthew J Seaman MA, D.Phil.
PGP: http://www.infracaninophile.co.uk/pgpkey From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 15:00:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B4190D86 for ; Tue, 4 Mar 2014 15:00:23 +0000 (UTC) Received: from mail-ob0-x234.google.com (mail-ob0-x234.google.com [IPv6:2607:f8b0:4003:c01::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7D88FF35 for ; Tue, 4 Mar 2014 15:00:23 +0000 (UTC) Received: by mail-ob0-f180.google.com with SMTP id wn1so2734790obc.39 for ; Tue, 04 Mar 2014 07:00:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=RU8uo0zho44e/WKo3142J6ay5fVziL0n+99X3nEPAt4=; b=NTWS4Guw8pE1swbOSBXLuiTLUJ337e+a1JE/4x+LwqjKhOkETmPcfciwWuL2s3qeRr j35kUOzDJ/D4/2rOJzbg6BfzuM8NkLcNPKDo/wsKHg44x7dh5ejpIALI1Bfv2TOiJfYj WEvYznjuKldwn67rEY9VcZM8ZkL2szl9rRbIc/VuRrUn2FgtIYoh9NPtaM1e4/wKpnkF 7E5+DGWtiX0dBK5+/X0LbpLDrlNXzf69VGs3vLIf86pzglGsue9P/HAicf0NcsMmNULb /ZEM6NiwdVCZbcu+KU3+LaC2mEoAfe7JQhLX/wUVbbtj4t4LJmbRpBx+Rrrp0XiUpGeP Hiew== MIME-Version: 1.0 X-Received: by 10.60.116.74 with SMTP id ju10mr73580oeb.6.1393945222919; Tue, 04 Mar 2014 07:00:22 -0800 (PST) Received: by 10.76.180.164 with HTTP; Tue, 4 Mar 2014 07:00:22 -0800 (PST) In-Reply-To: References: Date: Tue, 4 Mar 2014 07:00:22 -0800 Message-ID: Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE?
From: Freddie Cash To: Olav Gjerde Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 15:00:23 -0000 Lz4 compression appeared in 9.1-STABLE, so it's available in every release after that, including 9.2 and 10.0. Typos and terseness brought to you by the LG G2 running SlimKat. From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 15:44:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 27C63582 for ; Tue, 4 Mar 2014 15:44:32 +0000 (UTC) Received: from mail-yh0-x232.google.com (mail-yh0-x232.google.com [IPv6:2607:f8b0:4002:c01::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id D75A83F3 for ; Tue, 4 Mar 2014 15:44:31 +0000 (UTC) Received: by mail-yh0-f50.google.com with SMTP id t59so5037710yho.23 for ; Tue, 04 Mar 2014 07:44:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :message-id:references:to; bh=jTxjorlCMT2KVuc3u6RRo7e5km3cHPjfD4AzhhoPx6g=; b=dG69ujuhlitfELiRX63ncePKTFr2z7agtoaK4igSFnIGSakIKwctuKP0cLMI7wa86A WwtZoY3XLXmsbDxsHJJm7293DUQ3xDIHnuCmcQI2vpltT80DBOW2pDxuR7Fr9p/nsl1k l1fLxfc+FegzTAWQqvCpeUn21O85X+C4J4iCXCyzIt2xym0CRx12Q5Bh59rrzqw1kM9I p9f9VOurlbNewz5DTyIzhAUs7345XBwR3hCu0TpXleD7hdylZHZsAbk2m7ttmF2s6hsB DwTguj+ZucoKSpPOJgRCuLaZeqEt4bog8P//UCOCYxQwDqZyS/GIbbtvKrWRZ50daCMR wv0g== X-Received: by 10.236.62.232 with SMTP id y68mr377114yhc.12.1393947871048; Tue, 04 Mar 2014 07:44:31 -0800 (PST) Received: from [192.168.1.76] (75-63-29-182.lightspeed.irvnca.sbcglobal.net. [75.63.29.182]) by mx.google.com with ESMTPSA id e5sm48974190yhj.14.2014.03.04.07.44.30 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 04 Mar 2014 07:44:30 -0800 (PST) Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? From: aurfalien In-Reply-To: Date: Tue, 4 Mar 2014 07:44:28 -0800 Message-Id: References: To: Freddie Cash X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems , Olav Gjerde X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 15:44:32 -0000 The OP was asking in terms of L2Arc which I think is 9.2 and up. - aurf "Janitorial Services" On Mar 4, 2014, at 7:00 AM, Freddie Cash wrote: > Lz4 compression appeared in 9.1-STABLE, so it's available in every release > after that, including 9.2 and 10.0. > > Typos and terseness brought to you by the LG G2 running SlimKat. 
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 16:17:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C1A24111 for ; Tue, 4 Mar 2014 16:17:57 +0000 (UTC) Received: from mail-oa0-x22d.google.com (mail-oa0-x22d.google.com [IPv6:2607:f8b0:4003:c02::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 850EC8DC for ; Tue, 4 Mar 2014 16:17:57 +0000 (UTC) Received: by mail-oa0-f45.google.com with SMTP id o6so3074378oag.4 for ; Tue, 04 Mar 2014 08:17:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=pOqk9AwYdfhDIssepjgSb8ga48kR0TS6XgLamYd7QsQ=; b=0h58T4peGCieuqYsEowid3xk2tGX5ZDA9dwRKeDXbXilvTTybvzq2I85/zW5E3c2e7 iRupSUw8Ui3hyNk/QCuOoekULOlA73yW9nj4IHB8ZDqaqi9e2nj+d0AHD1KP0XnHFrr6 H8X2C+/Btj0BkYG67WqBGjdKyN7/f4x8fLXBhmk3TwyMTSuYfUYbvb8s45iNJ88MgSXM N6dFGBLshvc7+//0BiP3gsgeFGSiZOBT+rB2KMikMxjsqrqkE6+JoPlYcCeP6QVWNPnk OhW9zEZhcRhdFMsu5BpAwJVqwlgOuuMmrfLavYSB/llp5NvfOsxJiDlOsqFON3ob0afF 814Q== MIME-Version: 1.0 X-Received: by 10.182.29.33 with SMTP id g1mr216664obh.59.1393949876832; Tue, 04 Mar 2014 08:17:56 -0800 (PST) Received: by 10.76.180.164 with HTTP; Tue, 4 Mar 2014 08:17:56 -0800 (PST) In-Reply-To: References: Date: Tue, 4 Mar 2014 08:17:56 -0800 Message-ID: Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? From: Freddie Cash To: aurfalien Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems , Olav Gjerde X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 16:17:57 -0000 On Tue, Mar 4, 2014 at 7:44 AM, aurfalien wrote: > The OP was asking in terms of L2Arc which I think is 9.2 and up. 
> Whoops, missed the L2ARC part of the subject. -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 17:32:55 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F3E67771 for ; Tue, 4 Mar 2014 17:32:54 +0000 (UTC) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 84494FEA for ; Tue, 4 Mar 2014 17:32:54 +0000 (UTC) Received: from r2d2 ([82.69.141.170]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50008547818.msg for ; Tue, 04 Mar 2014 17:32:45 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Tue, 04 Mar 2014 17:32:45 +0000 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 82.69.141.170 X-Return-Path: prvs=114023f9f1=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <247A7F146F704E2884FA3AC4EB43BDC5@multiplay.co.uk> From: "Steven Hartland" To: "Freddie Cash" , "aurfalien" References: Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? Date: Tue, 4 Mar 2014 17:32:29 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="utf-8"; reply-type=original Content-Transfer-Encoding: 8bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: FreeBSD Filesystems , Olav Gjerde X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 17:32:55 -0000 Also be aware that a few issues have been discovered in the compression code recently, which avg has fixes for, so if you're going to use it, it would be worth backporting those as well. Regards Steve ----- Original Message ----- From: "Freddie Cash" To: "aurfalien" Cc: "FreeBSD Filesystems" ; "Olav Gjerde" Sent: Tuesday, March 04, 2014 4:17 PM Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? > On Tue, Mar 4, 2014 at 7:44 AM, aurfalien wrote: > >> The OP was asking in terms of L2Arc which I think is 9.2 and up. >> > > Whoops, missed the L2ARC part of the subject. ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk.
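For anyone following along, turning LZ4 on is a one-line property change once the pool feature is available; a minimal sketch, with "tank" standing in for the real pool name (note that "zpool upgrade" is one-way, since older systems cannot import an upgraded pool):

    # zpool upgrade tank                     # enable feature flags, including feature@lz4_compress
    # zpool get feature@lz4_compress tank    # should now report "enabled" or "active"
    # zfs set compression=lz4 tank           # only blocks written from here on are compressed
    # zfs get compression,compressratio tank

Datasets inherit the property, so setting it on the pool's top-level dataset covers everything beneath it.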
From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 20:33:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A900FA77 for ; Tue, 4 Mar 2014 20:33:58 +0000 (UTC) Received: from mail-wg0-f51.google.com (mail-wg0-f51.google.com [74.125.82.51]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 405DA776 for ; Tue, 4 Mar 2014 20:33:58 +0000 (UTC) Received: by mail-wg0-f51.google.com with SMTP id a1so73812wgh.10 for ; Tue, 04 Mar 2014 12:33:56 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:content-type; bh=ZTibA7ZNMFxxmEer4Ia1TRT9qPWSwUWIYABar2wy66U=; b=F21HGOIHb32MvBcO4YK8szKenZuO60jviZx2mat4BxjEIY5jGVAwTOVBaQmxjrVB1F uEKMJAQhAL70aHnTCjqXvjj3CWTx98r2/bqxH5SC1P36yqRtilak7QvjRHKfuKbUn+sB Ghre/F0iVZ44iL+6ndNeqnfH74FCSYn9PKqUyLYjdWEIdrSsx6tvA4BVFUPqM93BRfqw dn4FC7whnQpZ9vYLcCi5oaW41p2YxjE8uEWB8eJF/om7nQ34RNdlHw7vVVAD/drfmyc6 5pNrN8fWLBtU7YTQ+Ifyx83qZfT96WVaJTKnHfcBJLT+W4NwN8GMdMVgBBIMs17ZcdU/ 1MYQ== X-Gm-Message-State: ALoCoQlT6Sh8Ehj9RJsoTy6zTS0UR2M/XutQYQexRv6dpWwf+o/zbvKWZYXdQhYt/61M7QVlw33M MIME-Version: 1.0 X-Received: by 10.194.75.225 with SMTP id f1mr2138825wjw.87.1393965236528; Tue, 04 Mar 2014 12:33:56 -0800 (PST) Received: by 10.227.92.198 with HTTP; Tue, 4 Mar 2014 12:33:56 -0800 (PST) In-Reply-To: <5315D446.3040701@freebsd.org> References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> Date: Tue, 4 Mar 2014 21:33:56 +0100 Message-ID: Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? From: Olav Gjerde To: Matthew Seaman , freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 20:33:58 -0000 I managed to mess up who I replied to, and Matthew replied back with a good answer which I think didn't reach the mailing list. I actually have a problem with query performance in one of my databases related to running PostgreSQL on ZFS, which is why I'm so interested in compression for the L2ARC cache. The problem is random IO reads: creating a report where I aggregate 75000 rows takes 30 minutes!!! The table that I query has 400 million rows though. The dataset easily fits in memory, so if I run the same query again it takes less than a second. I'm going to test UFS with my dataset, it may be a lot faster as you said. Currently I've only tested ZFS with gzip, lz4 and no compression. Gzip and no compression have about the same performance, and LZ4 is about 20% faster (for both read and write). LZ4 has a compressratio of about 2.5 and gzip-9 has a compressratio of about 4.5. Steven Hartland, thank you for your suggestion. I will try 10-STABLE then instead of a RELEASE. On Tue, Mar 4, 2014 at 2:25 PM, Matthew Seaman wrote: > On 03/04/14 12:17, Olav Gjerde wrote: > > This is really great, I wonder how well it plays together with PostgreSQL > > and a SSD.
> > You probably *don't* want to turn on any sort of compression for a > Postgresql cluster's data area (ie. /usr/local/pgsql) -- and there are a > bunch of other tuning things to make ZFS and Pg play well together, like > adjusting the ZFS block size. The sort of small random IOs that RDBMSes > do are hard work for any filesystem, but particularly difficult for ZFS > due to the copy-on-write semantics it uses. It's a lot easier to get > good performance on a UFS partition. > > On the other hand, ZFS has recently grown TRIM support, which makes it a > much happier prospect on SSDs. > > Cheers, > > Matthew > > > > -- Olav Grønås Gjerde From owner-freebsd-fs@FreeBSD.ORG Tue Mar 4 20:52:40 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CB15A31F for ; Tue, 4 Mar 2014 20:52:40 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7B0D9981 for ; Tue, 4 Mar 2014 20:52:40 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.7/8.13.1) with ESMTP id s24KiuvW072607 for ; Tue, 4 Mar 2014 14:44:56 -0600 (CST) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Tue Mar 4 14:44:56 2014 Message-ID: <53163B43.7010009@denninger.net> Date: Tue, 04 Mar 2014 14:44:51 -0600 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.6 Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080201060205040708070300" X-Antivirus: avast! (VPS 140304-1, 03/04/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 04 Mar 2014 20:52:40 -0000 There's all sorts of issues here, and as someone who uses Postgres on ZFS/FreeBSD in a heavy mixed-I/O environment, I'm pretty aware of them. Getting the L2ARC cache OFF the spindles where the data set is will make a huge difference in performance in any mixed I/O (read and write) environment. The interleaving of that data on the base data set is murder on spinning rust due to the requirement to move the heads. I strongly recommend starting there. UFS *may* be faster, but don't count on it, as the small I/O issue is still a serious factor and seek times become the overwhelming latency problem very quickly with small I/Os. Remember too that you still need data protection (e.g. RAID, Gmirror, etc.). Benchmark for your workload and see. If that's not enough, going to SSDs erases the head-movement penalty completely.
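(As a concrete sketch of that first step: attaching an SSD as an L2ARC device is a single command, where "tank" and "ada1" are placeholder pool and device names:

    # zpool add tank cache ada1    # attach the SSD as a cache (L2ARC) vdev
    # zpool iostat -v tank 5       # the cache device then shows its own read/write counters

A cache vdev holds no irreplaceable data, so it can be removed again at any time with "zpool remove tank ada1".)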
You may well see improvements in net I/O under mixed database loads as high as 20 or more *times* (not percent) what you get from rotating media for this reason, especially with the better SSD devices. I have clocked (under synthetic conditions) improvements in I/O latency on "first accesses" for data not in the RAM cache as high as *one hundred times* if the workload includes interleaved writes (e.g. large numbers of clients who both need to read and write at once.) On 3/4/2014 2:33 PM, Olav Gjerde wrote: > I managed to mess up who I replied to, and Matthew replied back with a good > answer which I think didn't reach the mailing list. > > I actually have a problem with query performance in one of my databases > related to running PostgreSQL on ZFS, which is why I'm so interested in > compression for the L2ARC cache. The problem is random IO reads: creating > a report where I aggregate 75000 rows takes 30 minutes!!! The table > that I query has 400 million rows though. > The dataset easily fits in memory, so if I run the same query again it takes > less than a second. > > I'm going to test UFS with my dataset, it may be a lot faster as you said. > Currently I've only tested ZFS with gzip, lz4 and no compression. Gzip and > no compression have about the same performance, and LZ4 is about 20% > faster (for both read and write). LZ4 has a compressratio of about 2.5 and > gzip-9 has a compressratio of about 4.5. > > Steven Hartland, thank you for your suggestion. I will try 10-STABLE > then instead of a RELEASE. > > > On Tue, Mar 4, 2014 at 2:25 PM, Matthew Seaman wrote: > >> On 03/04/14 12:17, Olav Gjerde wrote: >>> This is really great, I wonder how well it plays together with PostgreSQL >>> and a SSD. >> You probably *don't* want to turn on any sort of compression for a >> Postgresql cluster's data area (ie. /usr/local/pgsql) -- and there are a >> bunch of other tuning things to make ZFS and Pg play well together, like >> adjusting the ZFS block size. The sort of small random IOs that RDBMSes >> do are hard work for any filesystem, but particularly difficult for ZFS >> due to the copy-on-write semantics it uses. It's a lot easier to get >> good performance on a UFS partition. >> >> On the other hand, ZFS has recently grown TRIM support, which makes it a >> much happier prospect on SSDs.
>> >> Cheers, >> >> Matthew >> >> >> >> > -- -- Karl karl@denninger.net From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 02:42:36 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A43CC3D0 for ; Wed, 5 Mar 2014 02:42:36 +0000 (UTC) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5D938FA7 for ; Wed, 5 Mar 2014 02:42:35 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id s252eb9k020123; Tue, 4 Mar 2014 20:40:38 -0600 (CST) Date: Tue, 4 Mar 2014 20:40:37 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Olav Gjerde Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? In-Reply-To: Message-ID: References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Tue, 04 Mar 2014 20:40:38 -0600 (CST) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 02:42:36 -0000 On Tue, 4 Mar 2014, Olav Gjerde wrote: > I managed to mess up who I replied to, and Matthew replied back with a good > answer which I think didn't reach the mailing list. > > I actually have a problem with query performance in one of my databases > related to running PostgreSQL on ZFS, which is why I'm so interested in > compression for the L2ARC cache. The problem is random IO reads: creating > a report where I aggregate 75000 rows takes 30 minutes!!! The table > that I query has 400 million rows though. > The dataset easily fits in memory, so if I run the same query again it takes > less than a second. Make sure that your database is on a filesystem with the zfs block size matching the database block size (rather than 128K). Otherwise far more data may be read than needed, and likewise, writes may result in writing far more data than needed. Regardless, L2ARC on SSD is a very good idea for this case.
Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 03:26:28 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4AA41109; Wed, 5 Mar 2014 03:26:28 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 1DB0777E; Wed, 5 Mar 2014 03:26:28 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s253QR8b005638; Wed, 5 Mar 2014 03:26:27 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s253QRpD005637; Wed, 5 Mar 2014 03:26:27 GMT (envelope-from linimon) Date: Wed, 5 Mar 2014 03:26:27 GMT Message-Id: <201403050326.s253QRpD005637@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/187261: [fuse] FUSE kernel panic when using socket / bind X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 03:26:28 -0000 Old Synopsis: FUSE kernel panic when using socket / bind New Synopsis: [fuse] FUSE kernel panic when using socket / bind Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Mar 5 03:25:57 UTC 2014 Responsible-Changed-Why: reclassify. http://www.freebsd.org/cgi/query-pr.cgi?pr=187261 From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 04:43:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 29740DBC for ; Wed, 5 Mar 2014 04:43:59 +0000 (UTC) Received: from mail-yh0-x22a.google.com (mail-yh0-x22a.google.com [IPv6:2607:f8b0:4002:c01::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id DF9B3E2E for ; Wed, 5 Mar 2014 04:43:58 +0000 (UTC) Received: by mail-yh0-f42.google.com with SMTP id a41so528050yho.1 for ; Tue, 04 Mar 2014 20:43:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:content-type:subject:message-id:date:to:mime-version; bh=ONZxMRgSvnFY9lzqb+UnMttrD2N9MlcghOH26xo36+w=; b=bg8+h5xm29F77N8Hp2HRzy925n9bwHl46PfSjCbnXckhzDRl+rHWmfKe7rf1HjuRcu vWVH8dIB+tKgM4m2Pm6KuV8Q9uBue2mViui5j94ZpojqTyaDNWHjm+gG23h8HuulYxAw Rk/M3/8NYpH1ep5u5vPJsAg0/GrwxjZPd5FSc0uVGgqg6ZJ9SagW085JrUJAWfGNfD43 5WcVVqNWZUO450j1pDc8ifShC8/Kked5t0VlY/ORhPRZQMXxPzAH/ywQ/zfPAOeRkP1r QwrLg4pRmazFMNdC/ncsJbay6o3mYVL/x5vZPBNnXzqpMw7CxoxdGcf+4BSgpMJT8E1T JNaA== X-Received: by 10.236.51.71 with SMTP id a47mr4204508yhc.100.1393994638064; Tue, 04 Mar 2014 20:43:58 -0800 (PST) Received: from [192.168.1.76] (75-63-29-182.lightspeed.irvnca.sbcglobal.net. 
[75.63.29.182]) by mx.google.com with ESMTPSA id t58sm3916128yho.20.2014.03.04.20.43.57 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 04 Mar 2014 20:43:57 -0800 (PST) From: aurfalien Subject: mdconfig via rc.conf Message-Id: <80EE335C-E85A-433B-A4A2-287BA8BA1345@gmail.com> Date: Tue, 4 Mar 2014 20:43:54 -0800 To: FreeBSD Filesystems Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 04:43:59 -0000 Hi, Apologies for the cross post, as I asked in the general questions list and also if this is not an appropriate question. But are there any other parameters I would need to set up a ram disk at boot time? I have this in my rc.conf; mdconfig_md100="-t malloc -s 12G" I've opted not to mount or format it for now. But upon boot, I do not see the md100 device. Is there anything I need in rc.conf to enable this? Thanks in advance, - aurf "Janitorial Services" From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 05:19:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A1F43567 for ; Wed, 5 Mar 2014 05:19:17 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 53A16142 for ; Wed, 5 Mar 2014 05:19:17 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.8/8.14.8) with ESMTP id s255JFce039260 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Tue, 4 Mar 2014 22:19:16 -0700 (MST) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.8/8.14.8/Submit) with ESMTP id s255JFli039257; Tue, 4 Mar 2014 22:19:15 -0700 (MST) (envelope-from wblock@wonkity.com) Date: Tue, 4 Mar 2014 22:19:15 -0700 (MST) From: Warren Block To: aurfalien Subject: Re: mdconfig via rc.conf In-Reply-To: <80EE335C-E85A-433B-A4A2-287BA8BA1345@gmail.com> Message-ID: References: <80EE335C-E85A-433B-A4A2-287BA8BA1345@gmail.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Tue, 04 Mar 2014 22:19:16 -0700 (MST) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 05:19:17 -0000 On Tue, 4 Mar 2014, aurfalien wrote: > Hi, > > Apologies for the cross post, as I asked in the general questions list and also if this is not an appropriate question. > > But are there any other parameters I would need to set up a ram disk at boot time? > > I have this in my rc.conf; > > mdconfig_md100="-t malloc -s 12G" rc.conf is sourced, but it is not the place for code. > I've opted not to mount or format it for now. > > But upon boot, I do not see the md100 device.
> > Is there anything I need in rc.conf to enable this? It depends on the version of FreeBSD. On FreeBSD 10, it's easy to do with an entry in /etc/fstab, see fstab(5). For earlier versions, I'm not sure of the best way. There's /etc/rc.local, but there may be more automated ways in the other rc scripts. From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 05:30:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BE05174F for ; Wed, 5 Mar 2014 05:30:44 +0000 (UTC) Received: from mail-yk0-x22d.google.com (mail-yk0-x22d.google.com [IPv6:2607:f8b0:4002:c07::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7C6C3289 for ; Wed, 5 Mar 2014 05:30:44 +0000 (UTC) Received: by mail-yk0-f173.google.com with SMTP id 10so1532116ykt.4 for ; Tue, 04 Mar 2014 21:30:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=96KgO9yhm4QOxzH8FMzf0jZ54ePiR9TM4Qr50K7EqPw=; b=y0zAKKXbEuJeXc+ywlrkdWnNq0XZWODl3WvenUG07GHj5oe6hkdS+/PrWFJfs2mfO+ O+bLPsnaC0IepSglx2hzsiOegCO/2yJPrJsQ1g1lNgnkqiJeVAFD1Z/PFxMTSmC0pUEn ogrmjLxDztzwVqs+V8T0qF6QJasmx1KFFN8Dw6zkcs/Je4UN8fGj5kCUsy8ovITIRi7s Cklj+OiNOj02lry46o+L5CvWJAAVbOrglUifWg8Krl4+41/xUs1/rlxDYcTZgGYqQlzG jjODEDYmrWyUI++lPxcHKU3T/oiTDlP1W6jHfJzW53Gnl8SDAHfxpyK3d1rP+M4Iqq0R ly0A== X-Received: by 10.236.24.196 with SMTP id x44mr4522248yhx.92.1393997443222; Tue, 04 Mar 2014 21:30:43 -0800 (PST) Received: from [192.168.1.76] (75-63-29-182.lightspeed.irvnca.sbcglobal.net. [75.63.29.182]) by mx.google.com with ESMTPSA id m21sm4278853yhl.9.2014.03.04.21.30.41 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 04 Mar 2014 21:30:42 -0800 (PST) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: mdconfig via rc.conf From: aurfalien In-Reply-To: Date: Tue, 4 Mar 2014 21:30:39 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: References: <80EE335C-E85A-433B-A4A2-287BA8BA1345@gmail.com> To: Warren Block X-Mailer: Apple Mail (2.1874) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 05:30:44 -0000 On Mar 4, 2014, at 9:19 PM, Warren Block wrote: > On Tue, 4 Mar 2014, aurfalien wrote: > >> Hi, >> >> Apologies for the cross post, as I asked in the general questions list and also if this is not an appropriate question. >> >> But are there any other parameters I would need to set up a ram disk at boot time? >> >> I have this in my rc.conf; >> >> mdconfig_md100="-t malloc -s 12G" > > rc.conf is sourced, but it is not the place for code. > >> I've opted not to mount or format it for now. >> >> But upon boot, I do not see the md100 device. >> >> Is there anything I need in rc.conf to enable this? > > It depends on the version of FreeBSD. On FreeBSD 10, it's easy to do with an entry in /etc/fstab, see fstab(5). For earlier versions, I'm not sure of the best way. There's /etc/rc.local, but there may be more automated ways in the other rc scripts.
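For the fstab(5) route Warren describes, a memory-backed filesystem can be created and mounted at boot with a single line; a rough sketch, with the mount point and size as placeholders:

    md    /ramdisk    mfs    rw,-s12g    2    0

The "mfs" filesystem type causes mount(8) to invoke mdmfs(8), which creates the md(4) device, runs newfs on it, and mounts it.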
Many thanks for the reply. What led me to rc.conf was this: http://ryanbowlby.com/2009/09/30/freebsd-ramdisk-mdconfig/ I'm using FreeBSD 9 and would need it to load very early on in the boot process but not be mounted. I'll take a look at rc.local, but if it's anything like Linux, well, that's the last thing that loads. - aurf "Janitorial Services" From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 05:45:00 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BBE649A1 for ; Wed, 5 Mar 2014 05:45:00 +0000 (UTC) Received: from mail.allbsd.org (gatekeeper.allbsd.org [IPv6:2001:2f0:104:e001::32]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AEEA037A for ; Wed, 5 Mar 2014 05:44:59 +0000 (UTC) Received: from alph.d.allbsd.org (p2106-ipbf2009funabasi.chiba.ocn.ne.jp [114.146.169.106]) (authenticated bits=128) by mail.allbsd.org (8.14.5/8.14.5) with ESMTP id s255ib09025674 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 5 Mar 2014 14:44:47 +0900 (JST) (envelope-from hrs@FreeBSD.org) Received: from localhost (localhost [IPv6:::1]) (authenticated bits=0) by alph.d.allbsd.org (8.14.7/8.14.7) with ESMTP id s255iWu8015649; Wed, 5 Mar 2014 14:44:36 +0900 (JST) (envelope-from hrs@FreeBSD.org) Date: Wed, 05 Mar 2014 14:43:44 +0900 (JST) Message-Id: <20140305.144344.2256752746789462462.hrs@allbsd.org> To: aurfalien@gmail.com Subject: Re: mdconfig via rc.conf From: Hiroki Sato In-Reply-To: References: <80EE335C-E85A-433B-A4A2-287BA8BA1345@gmail.com> X-PGPkey-fingerprint: BDB3 443F A5DD B3D0 A530 FFD7 4F2C D3D8 2793 CF2D X-Mailer: Mew version 6.5 on Emacs 24.3 / Mule 6.0 (HANACHIRUSATO) Mime-Version: 1.0 Content-Type: Text/Plain X-Virus-Scanned: clamav-milter 0.97.4 at gatekeeper.allbsd.org X-Virus-Status: Clean X-Greylist: Sender DNS name whitelisted, not delayed by milter-greylist-4.2.7 (mail.allbsd.org [133.31.130.32]); Wed, 05 Mar 2014 14:44:48 +0900 (JST) X-Spam-Status: No, score=-94.3 required=13.0 tests=CONTENT_TYPE_PRESENT, RCVD_IN_PBL,RCVD_IN_RP_RNBL,SPF_SOFTFAIL,USER_IN_WHITELIST autolearn=no version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on gatekeeper.allbsd.org Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 05:45:00 -0000 aurfalien <aurfalien@gmail.com> wrote in <B9EE914F-5B33-4063-AAA6-916624DECD11@gmail.com>: au> On Mar 4, 2014, at 9:19 PM, Warren Block <wblock@wonkity.com> wrote: au> au> > On Tue, 4 Mar 2014, aurfalien wrote: au> > au> >> Hi, au> >> au> >> Apologies for the cross-post, as I asked in the general questions list, and also if this is not an appropriate question. au> >> au> >> But are there any other parameters I would need to set up a ram disk at boot time? au> >> au> >> I have this in my rc.conf: au> >> au> >> mdconfig_md100="-t malloc -s 12G" au> > au> > rc.conf is sourced, but not the place for code. au> > au> >> I've opted not to mount or format it for now. au> >> au> >> But upon boot, I do not see the md100 device. au> >> au> >> Is there anything I need in rc.conf to enable this? au> > au> > It depends on the version of FreeBSD. On FreeBSD 10, it's easy to do with an entry in /etc/fstab, see fstab(5). For earlier versions, I'm not sure of the best way. There's /etc/rc.local, but there may be more automated ways in the other rc scripts. au> au> Many thanks for the reply. au> au> What led me to rc.conf was this: au> au> http://ryanbowlby.com/2009/09/30/freebsd-ramdisk-mdconfig/ au> au> I'm using FreeBSD 9 and would need it to load very early on in the boot process but not be mounted. mdconfig_md* have to always start from 0. An arbitrary value such as 100 cannot be used. -- Hiroki From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 05:47:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9DCE2AAB; Wed, 5 Mar 2014 05:47:34 +0000 (UTC) Received: from mail-yh0-x231.google.com (mail-yh0-x231.google.com [IPv6:2607:f8b0:4002:c01::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4D2F039E; Wed, 5 Mar 2014 05:47:34 +0000 (UTC) Received: by mail-yh0-f49.google.com with SMTP id z6so555758yhz.36 for ; Tue, 04 Mar 2014 21:47:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=QxsCwba8CG3ThdDHH9yrqEF7SS2ImZoSf2KOIMP4NQM=; b=Ox60YRnCujsjMXoKHDmFK84H7gpFfIUpyPvb5VuhOmF/SykuJXUEO71L+fafB1v0Wr OwA0cJ9xYB84bRamFIeDe26KT5usQHUUoP5I1Xozeh/jDEu8RIuBvqueHkijErYvLB5v UDhf3Jad2i3orJ2bmPGw9xuJou0vbUlX3utbpv7kkV0RovLKo0VELg0FhSMQSvQMXI8x hKX000E1UYU79z53YX90r60++xgj3dKlMCJ/kpqph/Nq9oOZRgoXj8u5PUlpYf4pWRUU EYsPu/zWGyO9fuGIsBKoFNJzutcLeMF7ofCBlGNViPUDSnLAOJJg6yUWDk9Y1DKgXM6A J7BQ== X-Received: by 10.236.3.10 with SMTP id 10mr3430870yhg.79.1393998453546; Tue, 04 Mar 2014 21:47:33 -0800 (PST) Received: from [192.168.1.76] (75-63-29-182.lightspeed.irvnca.sbcglobal.net.
[75.63.29.182]) by mx.google.com with ESMTPSA id 44sm4360562yhp.17.2014.03.04.21.47.32 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 04 Mar 2014 21:47:33 -0800 (PST) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: mdconfig via rc.conf From: aurfalien In-Reply-To: <20140305.144344.2256752746789462462.hrs@allbsd.org> Date: Tue, 4 Mar 2014 21:47:31 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: <3A96A2A5-0BB5-4F68-905D-D0889C1A1D6D@gmail.com> References: <80EE335C-E85A-433B-A4A2-287BA8BA1345@gmail.com> <20140305.144344.2256752746789462462.hrs@allbsd.org> To: Hiroki Sato X-Mailer: Apple Mail (2.1874) Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 05:47:34 -0000 On Mar 4, 2014, at 9:43 PM, Hiroki Sato wrote: > aurfalien wrote > in : > > au> On Mar 4, 2014, at 9:19 PM, Warren Block wrote: > au> > au> > On Tue, 4 Mar 2014, aurfalien wrote: > au> > > au> >> Hi, > au> >> > au> >> Apologies for the cross-post, as I asked in the general questions list, and also if this is not an appropriate question. > au> >> > au> >> But are there any other parameters I would need to set up a ram disk at boot time? > au> >> > au> >> I have this in my rc.conf: > au> >> > au> >> mdconfig_md100="-t malloc -s 12G" > au> > > au> > rc.conf is sourced, but not the place for code. > au> > > au> >> I've opted not to mount or format it for now. > au> >> > au> >> But upon boot, I do not see the md100 device. > au> >> > au> >> Is there anything I need in rc.conf to enable this? > au> > > au> > It depends on the version of FreeBSD. On FreeBSD 10, it's easy to do with an entry in /etc/fstab, see fstab(5). For earlier versions, I'm not sure of the best way. There's /etc/rc.local, but there may be more automated ways in the other rc scripts. > au> > au> Many thanks for the reply. > au> > au> What led me to rc.conf was this: > au> > au> http://ryanbowlby.com/2009/09/30/freebsd-ramdisk-mdconfig/ > au> > au> I'm using FreeBSD 9 and would need it to load very early on in the boot process but not be mounted. > > mdconfig_md* have to always start from 0. An arbitrary value such as > 100 cannot be used. Thanks for this. I assumed that md100 was fine since it worked when I did it manually from the command line. Why would it behave differently when run manually after boot but not during boot?
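A minimal sketch of the rc.conf route with Hiroki's correction applied, i.e. the unit renumbered from md100 down to md0 (rc.conf(5) documents the mdconfig_md* variables; the size is taken from the original post):

    mdconfig_md0="-t malloc -s 12G"

/etc/rc.d/mdconfig then creates /dev/md0 early in the boot sequence without formatting or mounting it, which matches the stated requirement of an early, unmounted device.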
- aurf "Janitorial Services" From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 07:24:19 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1CECEDDF for ; Wed, 5 Mar 2014 07:24:19 +0000 (UTC) Received: from mail-we0-f176.google.com (mail-we0-f176.google.com [74.125.82.176]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A5BF2AE for ; Wed, 5 Mar 2014 07:24:18 +0000 (UTC) Received: by mail-we0-f176.google.com with SMTP id x48so697045wes.35 for ; Tue, 04 Mar 2014 23:24:11 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=l86piV3ViEfX7qiig6f0oiP9T+xApbm5dt+CYOHiB2U=; b=RbFHJrOCfaaS7I0RMeBDnTy1LNtZV1iW1m51wEg08520zgDSZqvJtgLl2QxKS6D9tZ wYuJOjNHSvobYnIsQifeyDXajVoMcTRA+05Yrjga8ShjXU3497axHwvhuHt44QjvoSpI bJ134ILSe/Hztd8ZQRzM5e7xxLA+6+HSyzHK0b9tX/mqi4YUjwmFSfTiUVRbTLA0hVGb U49yIdZVnO7OUunE1AKaUrHM22vzxxLUI9NErJqu1wPKkPC97G2pr4IvSxIqsTAyrBZd qGoqAU9w/d1BaOXcG0avqTn4uEfa0OZyW1v8P3/kL5BHpOQ7vgeUHEAlyDzSkvxzEGw7 wTQA== X-Gm-Message-State: ALoCoQlFKcJ15tjtLirmOBpGx2k0m8NbbsWP7Mm1wNdFbD9PueIHJwLK0NSLx4IxkNA/0ZV2tegz MIME-Version: 1.0 X-Received: by 10.194.250.34 with SMTP id yz2mr6043910wjc.18.1394003827632; Tue, 04 Mar 2014 23:17:07 -0800 (PST) Received: by 10.227.92.198 with HTTP; Tue, 4 Mar 2014 23:17:07 -0800 (PST) In-Reply-To: References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> Date: Wed, 5 Mar 2014 08:17:07 +0100 Message-ID: Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? From: Olav Gjerde To: Bob Friesenhahn Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 07:24:19 -0000 Currently I've set the recordsize to 8k, however I'm thinking maybe a recordsize of 4k may be more optimal? This is because the compressratio with LZ4 is around 2.5, and this value has been constant for all my data while growing from a few megabytes to tens of gigabytes. Maybe something I should play with to see if it makes a difference. On Wed, Mar 5, 2014 at 3:40 AM, Bob Friesenhahn < bfriesen@simple.dallas.tx.us> wrote: > On Tue, 4 Mar 2014, Olav Gjerde wrote: > > I managed to mess up who I replied to and Matthew replied back with a good >> answer which I think didn't reach the mailing list. >> >> I actually have a problem with query performance in one of my databases >> related to running PostgreSQL on ZFS. Which is why I'm so interested in >> compression for the L2ARC Cache. The problem is random IO reads, where >> creating a report where I aggregate 75000 rows takes 30 minutes!!! The >> table >> that I query has 400 million rows though. >> The dataset easily fits in memory, so if I run the same query again it >> takes >> less than a second. >> > > Make sure that your database is on a filesystem with zfs block-size > matching the database block-size (rather than 128K).
Otherwise far more > data may be read than needed, and likewise, writes may result in writing > far more data than needed. > > Regardless, L2ARC on SSD is a very good idea for this case. > > Bob > -- > Bob Friesenhahn > bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ > GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ > -- Olav Grønås Gjerde From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 09:28:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B15B86CA for ; Wed, 5 Mar 2014 09:28:50 +0000 (UTC) Received: from mail-yh0-x235.google.com (mail-yh0-x235.google.com [IPv6:2607:f8b0:4002:c01::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6A3FDFC8 for ; Wed, 5 Mar 2014 09:28:50 +0000 (UTC) Received: by mail-yh0-f53.google.com with SMTP id v1so705411yhn.12 for ; Wed, 05 Mar 2014 01:28:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=BFD8dXNziphn8RNtKPbgs7b1VCr/94A6QmDNJc+TrmA=; b=L+SX9R7IhzNfMi45p2zohrLRmeUuEDlTDn8tpsyfub/4DCQZ9Z7YIBu4buH0zunu+H 7Xesbbyyk+qfMqwx/qfcogF2Vsz4FP/lDYpVL/dxzyksUQe6UEGijVb+6ZENfWMHYV2g 0sslW2WxBFwu1ZCbimZkt1POAr1ZGnyZpKG8gAcreS8o+iRlgc6kwZkxKAFCTnx41bZn K7QZYQvwNhu4Ybp0a9f4WkSkWDDwomps+V7ShK9V4bfqxtWxSpPn2hLageL+TUyqDnHo JgMxfEvS5KqsLoiTnJEqHDGJ1MeTw0k+TYT+Jd3lkQWP5yY864Z6a6Do+Guu+DDZYBon vQUA== MIME-Version: 1.0 X-Received: by 10.236.159.65 with SMTP id r41mr5658397yhk.20.1394011729542; Wed, 05 Mar 2014 01:28:49 -0800 (PST) Received: by 10.170.54.17 with HTTP; Wed, 5 Mar 2014 01:28:49 -0800 (PST) In-Reply-To: References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> Date: Wed, 5 Mar 2014 09:28:49 +0000 Message-ID: Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? From: krad To: Olav Gjerde Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 09:28:50 -0000 I thought the recordsize referred to the maximum block size rather than the actual block size. Please correct me if I'm wrong On 5 March 2014 07:17, Olav Gjerde wrote: > Currently I've set the recordsize to 8k, however I'm thinking maybe a > recordsize of 4k may be more optimal? > This is because the compressratio with LZ4 is around 2.5, and this value has > been constant for all my data while growing from a few megabytes to > tens of gigabytes. > Maybe something I should play with to see if it makes a difference. > > > On Wed, Mar 5, 2014 at 3:40 AM, Bob Friesenhahn < > bfriesen@simple.dallas.tx.us> wrote: > > > On Tue, 4 Mar 2014, Olav Gjerde wrote: > > > > I managed to mess up who I replied to and Matthew replied back with a > good > >> answer which I think didn't reach the mailing list. > >> > >> I actually have a problem with query performance in one of my databases > >> related to running PostgreSQL on ZFS.
Which is why I'm so interested in > >> compression for the L2ARC Cache. The problem is random IO reads, where > >> creating a report where I aggregate 75000 rows takes 30 minutes!!! The > >> table > >> that I query has 400 million rows though. > >> The dataset easily fits in memory, so if I run the same query again it > >> takes > >> less than a second. > >> > > > > Make sure that your database is on a filesystem with zfs block-size > > matching the database block-size (rather than 128K). Otherwise far more > > data may be read than needed, and likewise, writes may result in writing > > far more data than needed. > > > > Regardless, L2ARC on SSD is a very good idea for this case. > > > > Bob > > -- > > Bob Friesenhahn > > bfriesen@simple.dallas.tx.us, > http://www.simplesystems.org/users/bfriesen/ > > GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ > > > > > > -- > Olav Grønås Gjerde > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 14:17:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AC57FD9D for ; Wed, 5 Mar 2014 14:17:39 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5BAC6D9C for ; Wed, 5 Mar 2014 14:17:38 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.7/8.13.1) with ESMTP id s25EHXOJ073101 for ; Wed, 5 Mar 2014 08:17:33 -0600 (CST) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Wed Mar 5 08:17:33 2014 Message-ID: <531731F8.1050000@denninger.net> Date: Wed, 05 Mar 2014 08:17:28 -0600 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> In-Reply-To: X-Enigmail-Version: 1.6 X-Antivirus: avast! (VPS 140304-1, 03/04/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 14:17:39 -0000 It probably won't matter all that much. You need to profile this, but you can get a decent idea what's going on from systat or iostat; look at the transaction count, size per transaction, and percentage of time the disks are busy. I bet you find low transaction size, moderate count and very high disk busy percentages, which points to lots of head movement on an average basis compared against bytes moved.
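A sketch of that kind of profiling with the stock tools named above (the 5-second interval is arbitrary):

    iostat -d -w 5     # KB/t is average size per transaction, tps is the transaction count
    iostat -x -w 5     # extended per-device stats, including %b (percent of time busy)
    systat -iostat     # the same picture as a live, screen-oriented display

Low KB/t combined with high %b is the small-transaction, head-movement-bound pattern described here.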
That's the paradigm where spinning rust loses, basically, and the only answers are to spread the I/O across more spindles so you get more positioner economy, go to faster-rotating drives with faster seek times, or move to SSDs. If your I/O pattern is as I suspect, the first thing to do is get the intent log (ZIL) off the rest of the pool's disks, as that de-couples that I/O from the actual data storage. In mixed, small I/O environments this frequently will double total throughput, and it costs you just one spindle, and it can be a small one too as the log device requirement is modest. Making that log device an SSD is an option, but beware of using cheap ones there, as fault-tolerance rules apply to the log device as they do to data disks (this is not true for a cache drive, which is ignored if it posts errors and results in no data loss.) Presuming that doesn't provide enough boost, the next logical move is to consider putting the DBMS on SSDs itself. That completely removes positioning latency and will result in a massive speed increase. On 3/5/2014 1:17 AM, Olav Gjerde wrote: > Currently I've set the recordsize to 8k, however I'm thinking maybe a > recordsize of 4k may be more optimal? > This is because the compressratio with LZ4 is around 2.5, and this value has > been constant for all my data while growing from a few megabytes to a > tenfold of gigabytes. > Maybe something I should play with to see if it makes a difference. > > > On Wed, Mar 5, 2014 at 3:40 AM, Bob Friesenhahn < > bfriesen@simple.dallas.tx.us> wrote: > >> On Tue, 4 Mar 2014, Olav Gjerde wrote: >> >> I managed to mess up who I replied to and Matthew replied back with a good >>> answer which I think didn't reach the mailing list. >>> >>> I actually have a problem with query performance in one of my databases >>> related to running PostgreSQL on ZFS. Which is why I'm so interested in >>> compression for the L2ARC Cache. The problem is random IO reads, where >>> creating a report where I aggregate 75000 rows takes 30 minutes!!! The >>> table >>> that I query has 400 million rows though. >>> The dataset easily fits in memory, so if I run the same query again it >>> takes >>> less than a second. >>> >> Make sure that your database is on a filesystem with zfs block-size >> matching the database block-size (rather than 128K). Otherwise far more >> data may be read than needed, and likewise, writes may result in writing >> far more data than needed. >> >> Regardless, L2ARC on SSD is a very good idea
>> Bob >> -- >> Bob Friesenhahn >> bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ >> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ >> > > -- -- Karl karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 5 14:53:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9276A91A for ; Wed, 5 Mar 2014 14:53:17 +0000 (UTC) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 51472218 for ; Wed, 5 Mar 2014 14:53:16 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id s25Er9m5022237; Wed, 5 Mar 2014 08:53:09 -0600 (CST) Date: Wed, 5 Mar 2014 08:53:09 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: krad Subject: Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE? In-Reply-To: Message-ID: References: <53157CC2.8080107@FreeBSD.org> <5315D446.3040701@freebsd.org> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Wed, 05 Mar 2014 08:53:09 -0600 (CST) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 05 Mar 2014 14:53:17 -0000 On Wed, 5 Mar 2014, krad wrote: > I thought the recordsize referred to the maximum block size rather than the actual block size. Please correct me if I'm wrong It is the actual block size used except for tail blocks (i.e. end of file) and is the block size before any compression is applied. When compression is enabled, the actual block written to disk is hopefully smaller.
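Applying that to the PostgreSQL case discussed above might look like the following (a sketch: the pool and dataset names are invented, and 8k matches PostgreSQL's default page size):

    zfs create -o recordsize=8k -o compression=lz4 tank/pgdata
    zfs get recordsize,compression,compressratio tank/pgdata

Note that recordsize only applies to blocks written after it is set, so it should be in place before the database files are created, or the data should be copied in again afterwards.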
Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Thu Mar 6 22:30:08 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7D8EA4D2; Thu, 6 Mar 2014 22:30:08 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 53976324; Thu, 6 Mar 2014 22:30:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s26MU8QE054845; Thu, 6 Mar 2014 22:30:08 GMT (envelope-from bdrewery@freefall.freebsd.org) Received: (from bdrewery@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s26MU7Wh054770; Thu, 6 Mar 2014 16:30:07 -0600 (CST) (envelope-from bdrewery) Date: Thu, 6 Mar 2014 16:30:07 -0600 (CST) Message-Id: <201403062230.s26MU7Wh054770@freefall.freebsd.org> To: rick@wirelessleiden.nl, bdrewery@FreeBSD.org, freebsd-fs@FreeBSD.org From: bdrewery@FreeBSD.org Subject: Re: kern/121385: [unionfs] unionfs cross mount -> kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 06 Mar 2014 22:30:08 -0000 Synopsis: [unionfs] unionfs cross mount -> kernel panic State-Changed-From-To: open->closed State-Changed-By: bdrewery State-Changed-When: Thu Mar 6 16:30:07 CST 2014 State-Changed-Why: duplicate of kern/172334 http://www.freebsd.org/cgi/query-pr.cgi?pr=121385 From owner-freebsd-fs@FreeBSD.ORG Sun Mar 9 15:42:37 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 812041BE; Sun, 9 Mar 2014 15:42:37 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 56AE87F6; Sun, 9 Mar 2014 15:42:37 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s29FgbsU059034; Sun, 9 Mar 2014 15:42:37 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s29Fgb9i059033; Sun, 9 Mar 2014 15:42:37 GMT (envelope-from linimon) Date: Sun, 9 Mar 2014 15:42:37 GMT Message-Id: <201403091542.s29Fgb9i059033@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: bin/187071: [nfs] nfs server only start 2 daemons 1 master & 1 server X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 09 Mar 2014 15:42:37 -0000 Old Synopsis: nfs server only start 2 daemons 1 master & 1 server New Synopsis: [nfs] 
nfs server only start 2 daemons 1 master & 1 server Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Mar 9 15:42:17 UTC 2014 Responsible-Changed-Why: reclassify http://www.freebsd.org/cgi/query-pr.cgi?pr=187071 From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 11:06:44 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2F3B3FF9 for ; Mon, 10 Mar 2014 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 10714803 for ; Mon, 10 Mar 2014 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2AB6hfx043169 for ; Mon, 10 Mar 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2AB6hC7043167 for freebsd-fs@FreeBSD.org; Mon, 10 Mar 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 10 Mar 2014 11:06:43 GMT Message-Id: <201403101106.s2AB6hC7043167@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 10 Mar 2014 11:06:44 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/187261 fs [fuse] FUSE kernel panic when using socket / bind o bin/187071 fs [nfs] nfs server only start 2 daemons 1 master & 1 ser o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186574 fs [zfs] zpool history hangs (infinite loop) o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/185858 fs [zfs] zvol clone can't see new device o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. 
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o 
kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a 
non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs 
[zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 340 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 13:46:21 2014 Return-Path: Delivered-To: fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D00F5831; Mon, 10 Mar 2014 13:46:21 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A5316B52; Mon, 10 Mar 2014 13:46:21 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2ADkL9K095547; Mon, 10 Mar 2014 13:46:21 GMT (envelope-from gnn@freefall.freebsd.org) Received: (from gnn@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2ADkLUR095546; Mon, 10 Mar 2014 13:46:21 GMT (envelope-from gnn) Date: Mon, 10 Mar 2014 13:46:21 GMT Message-Id: <201403101346.s2ADkLUR095546@freefall.freebsd.org> To: gnn@FreeBSD.org, fs@FreeBSD.org From: gnn@FreeBSD.org Subject: Re: kern/167362: [fusefs] Reproduceble Page Fault when running rsync over sshfs/encfs. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 10 Mar 2014 13:46:21 -0000 Synopsis: [fusefs] Reproduceble Page Fault when running rsync over sshfs/encfs. Responsible-Changed-From-To: ->fs Responsible-Changed-By: gnn Responsible-Changed-When: Mon Mar 10 13:46:00 UTC 2014 Responsible-Changed-Why: Give to the Filesystem mailing list. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=167362

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 13:57:16 2014
Date: Mon, 10 Mar 2014 13:57:15 GMT
Message-Id: <201403101357.s2ADvFi0098453@freefall.freebsd.org>
From: linimon@FreeBSD.org
To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
Subject: Re: bin/187221: [su+j] [patch] fsck_ufs -p segmentation fault with SU+J

Old Synopsis: fsck_ufs -p segmentation fault with SU+J
New Synopsis: [su+j] [patch] fsck_ufs -p segmentation fault with SU+J

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon Mar 10 13:56:34 UTC 2014
Responsible-Changed-Why:
since this has a patch, assign it to fs@.

http://www.freebsd.org/cgi/query-pr.cgi?pr=187221

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 14:03:54 2014
Date: Mon, 10 Mar 2014 15:03:43 +0100
From: Albert Shih
To: freebsd-fs@FreeBSD.org
Subject: ZFS & Dedup & SSD.
Message-ID: <20140310140343.GA22517@pcjas.obspm.fr>
Hi all,

If I use an SSD for dedup, what happens if I lose the contents of the SSD? Can I still get at the data even without the SSD?

I ask because SSDs are expensive, so RAID on SSDs is even more expensive. I would like to use just one SSD for the dedup.

Regards.

JAS

--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
France
Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71
xmpp: jas@obspm.fr
Heure local/Local time:
lun 10 mar 2014 15:02:05 CET

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 14:36:42 2014
Date: Mon, 10 Mar 2014 14:36:41 +0000
From: krad
To: Albert Shih
Cc: FreeBSD FS
Subject: Re: ZFS & Dedup & SSD.
In-Reply-To: <20140310140343.GA22517@pcjas.obspm.fr>

You don't use the SSD for dedup as such; rather, you use it as an L2ARC device and hope the full dedup table ends up stored in the cache. Alternatively, just put LOTS of RAM in your machine. In my experience dedup is very expensive, and unless the ratio is very high it's cheaper and simpler just to buy the extra storage.

Often you can get the benefits of dedup without actually using it, though: e.g. clone a ZFS filesystem rather than cloning at the VM level. Things will diverge over time of course, but it can give you a lot of what you need.
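Worth noting for sizing purposes: zdb(8) can simulate dedup on an existing pool before you ever enable it, which answers whether the dedup table would fit in RAM or L2ARC at all. A minimal sketch, assuming the pool is named "tank":

# Simulate dedup on an existing pool without enabling it.  The printed
# histogram ends with the projected dedup ratio, and the entry count lets
# you size RAM/L2ARC for the DDT (roughly 320 bytes per entry is the
# figure usually quoted).
zdb -S tank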
On 10 March 2014 14:03, Albert Shih wrote:
> Hi all,
>
> If I use an SSD for dedup, what happens if I lose the contents of the
> SSD? Can I still get at the data even without the SSD?
>
> I ask because SSDs are expensive, so RAID on SSDs is even more
> expensive. I would like to use just one SSD for the dedup.
>
> Regards.
>
> JAS
>
> --
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> 5 Place Jules Janssen
> 92195 Meudon Cedex
> France
> Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71
> xmpp: jas@obspm.fr
> Heure local/Local time:
> lun 10 mar 2014 15:02:05 CET
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:12:18 2014
Message-ID: <531DF0DD.8070809@netlabs.org>
Date: Mon, 10 Mar 2014 18:05:33 +0100
From: Adrian Gschwend
To: freebsd-fs@freebsd.org
Subject: Reoccuring ZFS performance problems

Hi group,

(I have a lot of pastes in here; see http://pastebin.com/yjQnLryP for this email in case the mail kills too-long lines)

On a regular basis I run into some very weird ZFS performance issues on my server. When it happens, file IO is terribly slow and even a simple ls can take a long time (worst case up to minutes). Everything that relies on file IO is basically dead in this mode, so even starting top or other tools is a PITA. This state can last from minutes to hours and goes back to normal after some random time. A reboot does not necessarily fix it; often I'm back in exactly this state after a reboot.

I do not see any patterns in my monitoring (munin) for when it happens: by the time munin starts to time out because of the problem, I see no peaks leading up to it in any of the system graphs I collect.
When I run 'top' in this mode I see many processes in one of these states:

tx->tx
zfs
umtxn (mainly on mysql, which is unkillable in this mode)
uwait

Setup:

* FreeBSD 9.2-RELEASE-p3 in a KVM (SmartOS Solaris host, running ZFS itself)
* I'm using mfsbsd to do a ZFS-only system
* No specific ZFS changes (I did play with some, see the last part of this email)
* There are 5 jails running
* I run various Apaches (PHP/SVN/TRAC etc), MySQL, LDAP daemon, a JVM and some SIP servers (Freeswitch)
* Normal load is around 20-30% (2 cores)
* Swap is currently at 1% usage (4G available)
* I have 16GB of memory available; munin still shows around 1-2 GB as free. It may be that the issue happens faster with less memory, but I cannot prove that either.
* Currently no dtrace enabled, so I can't get much further than the standard tools shipped with BSD
* zpool status does not report any failures

The issues are not new; they first appeared while the system was still running on real hardware (FBSD 8.x) and not within a KVM. Back then I assumed I had a hardware problem, but the problem reappeared on the virtualized install. That install was basically a 1:1 zfs send copy of the old system plus some bootloader hacking, so exactly the same software levels. I switched to a new install on 9.x and had the issue on every single release there as well. I did not try 10.x yet.

fstat | wc -l: 7068 (took forever to finish)

gstat gives me:

dT: 1.010s w: 1.000s
 L(q) ops/s  r/s kBps ms/r  w/s kBps  ms/w %busy Name
   10   575    1    0  0.2  574  653  17.8 100.4| vtbd0
    0     0    0    0  0.0    0    0   0.0   0.0| PART/vtbd0/vtbd0
    0     0    0    0  0.0    0    0   0.0   0.0| vtbd0p1
    0     0    0    0  0.0    0    0   0.0   0.0| vtbd0p2
   10   575    1    0  0.3  574  653  18.0 100.4| vtbd0p3
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/vtbd0/vtbd0
    0     0    0    0  0.0    0    0   0.0   0.0| cd0
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/vtbd0p1/vtbd0p1
    0     0    0    0  0.0    0    0   0.0   0.0| LABEL/vtbd0p1/vtbd0p1
    0     0    0    0  0.0    0    0   0.0   0.0| gptid/e402ecce-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/vtbd0p2/vtbd0p2
    0     0    0    0  0.0    0    0   0.0   0.0| LABEL/vtbd0p2/vtbd0p2
    0     0    0    0  0.0    0    0   0.0   0.0| gptid/e4112d88-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/vtbd0p3/vtbd0p3
    0     0    0    0  0.0    0    0   0.0   0.0| SWAP/swap/gptid/e4112d88-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/cd0/cd0
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/gptid/e402ecce-89ca-11e2-a867-3264262b9894/gptid/e402ecce-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0   0.0| DEV/gptid/e4112d88-89ca-11e2-a867-3264262b9894/gptid/e4112d88-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0   0.0| ZFS::VDEV/zfs::vdev/vtbd0p3

ms/w changes a lot; the highest I've seen right now was around 70.

zfs iostat 2:

            capacity     operations    bandwidth
pool     alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank      173G  21.5G      3     56  30.9K   535K
tank      173G  21.5G     62    376  49.0K  1.22M
tank      173G  21.5G     46    340  84.3K   565K
tank      173G  21.5G     45    566  74.6K   800K
tank      173G  21.5G     32    222  92.0K   958K
tank      173G  21.5G     63    392   120K  1.10M
tank      173G  21.5G     16    286  14.2K   338K
tank      173G  21.5G     29    313  24.6K   831K
tank      173G  21.5G      0    289      0   445K
tank      173G  21.5G     27    244  32.6K   293K
tank      173G  21.5G     43    385  42.8K   477K
tank      173G  21.5G     31    329  15.7K   710K
tank      173G  21.5G     65    394  46.8K  1.50M
tank      173G  21.5G     80    320   127K   754K
tank      173G  21.5G     30    425   144K  1.09M
tank      173G  21.5G     13    399  25.9K   379K
tank      173G  21.5G     10    194  5.22K   162K
tank      173G  21.5G     18    311  45.5K  1.02M
tank      173G  21.5G     29    202  58.5K   344K
tank      173G  21.5G     32    375   108K   926K

On the host OS (SmartOS) zpool iostat 2 shows me (there is one other FBSD box but there is almost no IO on that one):

            capacity     operations    bandwidth
pool     alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zones     602G   230G      5    278  44.0K  2.42M
zones     602G   230G      2    154  19.9K  2.18M
zones     602G   230G      0  1.47K      0  10.3M
zones     602G   230G      0    128      0  1.44M
zones     602G   230G      0    270      0  2.61M
zones     602G   230G      0  1.39K      0  10.2M
zones     602G   230G      0    114  7.96K  2.10M
zones     602G   230G      0    979  7.96K  7.84M

When the guest system is not in this state the writes are lower and I don't see intervals where reads are 0.

I have been googling about this topic forever, and while I do find people who report similar issues, no one I've contacted found a real explanation for it. Based on various guides I started adapting the basic config:

cat /boot/loader.conf:

vfs.zfs.zfetch.block_cap=64

# this one was horrible, bootup alone was dogslow
#vfs.zfs.write_limit_override=1048576

#vfs.zfs.txg.timeout="5"

# so far good results?
vfs.zfs.prefetch_disable="1"

First I thought disabling prefetch solved the issue for a while, but it looks like I was too optimistic with that one. However, ls feels *much* faster when the system is happy since I disabled prefetch.

I'm really totally lost on this one, so I would appreciate hints about how to debug it. I'm willing to test whatever it takes to figure out where this issue is.

thanks

Adrian

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:27:32 2014
Date: Mon, 10 Mar 2014 12:27:31 -0500
From: Adam Vande More
To: Adrian Gschwend
Cc: freebsd-fs
Subject: Re: Reoccuring ZFS performance problems

On Mon, Mar 10, 2014 at 12:05 PM, Adrian Gschwend wrote:

> First I thought disabling prefetch solved the issue for a while, but
> it looks like I was too optimistic with that one.
> However, ls feels *much* faster when the system is happy since I
> disabled prefetch.
>
> I'm really totally lost on this one, so I would appreciate hints about
> how to debug it.
>
> I'm willing to test whatever it takes to figure out where this issue is.

Are you using a ZFS backed swap device?

--
Adam

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:32:24 2014
Date: Mon, 10 Mar 2014 17:31:56 -0000
From: "Steven Hartland"
To: "Adrian Gschwend", freebsd-fs@freebsd.org
Subject: Re: Reoccuring ZFS performance problems

Looks like you may be out of IOP/s, but just in case: are you using TRIM at all?

sysctl -a | grep trim

If you are, what does "gstat -d" show?

You also mention you're using MySQL; have you applied the standard tuning for MySQL on ZFS?

Regards
Steve

----- Original Message -----
From: "Adrian Gschwend"
To: freebsd-fs@freebsd.org
Sent: Monday, March 10, 2014 5:05 PM
Subject: Reoccuring ZFS performance problems

> Hi group,
>
> (I have a lot of pastes in here; see http://pastebin.com/yjQnLryP for
> this email in case the mail kills too-long lines)
>
> [...]
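The IOPS saturation Steven is hinting at can be read straight off gstat: a queue length (L(q)) pegged around 10 and %busy near 100 while the device manages only a few hundred ops/s, as in the paste above, is the classic signature of a device that is IOPS-bound rather than bandwidth-bound. A sketch of how one might watch just the pool's backing partition (device name taken from the earlier gstat output):

# Filter gstat to the pool's vdev partition; -d adds BIO_DELETE (TRIM)
# columns.  Sustained L(q) ~10 and %busy ~100 at low ops/s = IOPS-bound.
gstat -d -f 'vtbd0p3$'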
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:36:50 2014
Message-ID: <531DF82F.3010607@netlabs.org>
Date: Mon, 10 Mar 2014 18:36:47 +0100
From: Adrian Gschwend
To: Adam Vande More
Cc: freebsd-fs
Subject: Re: Reoccuring ZFS performance problems

On 10.03.14 18:27, Adam Vande More wrote:

Hi Adam,

> Are you using a ZFS backed swap device?

It does not look like it:

      34  419430333  vtbd0  GPT  (200G)
      34        128      1  freebsd-boot  (64k)
     162    8388608      2  freebsd-swap  (4.0G)
 8388770  411041597      3  freebsd-zfs  (196G)

regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:40:55 2014
Message-ID: <531DF924.5030109@netlabs.org>
Date: Mon, 10 Mar 2014 18:40:52 +0100
From: Adrian Gschwend
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: Reoccuring ZFS performance problems

On 10.03.14 18:31, Steven Hartland wrote:

> Looks like you may be out of IOP/s, but just in case: are you using TRIM
> at all?
> sysctl -a | grep trim

vfs.zfs.vdev.trim_on_init: 1
vfs.zfs.vdev.trim_max_pending: 64
vfs.zfs.vdev.trim_max_bytes: 2147483648
vfs.zfs.trim.enabled: 1
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
kstat.zfs.misc.zio_trim.bytes: 0
kstat.zfs.misc.zio_trim.success: 0
kstat.zfs.misc.zio_trim.unsupported: 115
kstat.zfs.misc.zio_trim.failed: 0

so it looks like TRIM is enabled

> If you are, what does "gstat -d" show?

It looks like my MySQL process finally finished and now the system is back to completely fine:

dT: 1.010s w: 1.000s
 L(q) ops/s  r/s kBps ms/r  w/s kBps  ms/w  d/s kBps  ms/d %busy Name
   10   203    0    0  0.0  192 1674  38.8    0    0   0.0  95.2| vtbd0
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| vtbd0p1
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| vtbd0p2
   10   203    0    0  0.0  192 1674  39.0    0    0   0.0  95.5| vtbd0p3
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| cd0
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| gptid/e402ecce-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| gptid/e4112d88-89ca-11e2-a867-3264262b9894

I restarted MySQL now; curious how long it will take.

> You also mention you're using MySQL; have you applied the standard
> tuning for MySQL on ZFS?

At first I didn't do anything special with MySQL; during the process I recreated the MySQL ZFS dataset with a new record size:

# zfs get recordsize tank/storage/data/db/data
NAME                       PROPERTY    VALUE  SOURCE
tank/storage/data/db/data  recordsize  16K    local

regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:50:20 2014
Date: Mon, 10 Mar 2014 17:50:08 -0000
From: "Steven Hartland"
To: "Adrian Gschwend"
Cc: freebsd-fs@freebsd.org
Subject: Re: Reoccuring ZFS performance problems

----- Original Message -----
From: "Adrian Gschwend"
To: "Steven Hartland"
Cc: freebsd-fs@freebsd.org
Sent: Monday, March 10, 2014 5:40 PM
Subject: Re: Reoccuring ZFS performance problems

>> Looks like you may be out of IOP/s, but just in case: are you using
>> TRIM at all?
>> sysctl -a | grep trim
>
> vfs.zfs.vdev.trim_on_init: 1
> [...]
> kstat.zfs.misc.zio_trim.unsupported: 115
>
> so it looks like TRIM is enabled

It's enabled but not in use, as the devices are reporting unsupported.

>> If you are, what does "gstat -d" show?
>
> It looks like my MySQL process finally finished and now the system is
> back to completely fine:
>
> [...]
>
> I restarted MySQL now; curious how long it will take.

>> You also mention you're using MySQL; have you applied the standard
>> tuning for MySQL on ZFS?
>
> At first I didn't do anything special with MySQL; during the process I
> recreated the MySQL ZFS dataset with a new record size:
>
> # zfs get recordsize tank/storage/data/db/data
> NAME                       PROPERTY    VALUE  SOURCE
> tank/storage/data/db/data  recordsize  16K    local

Disabled atime, configured innodb settings?

Regards
Steve
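The "standard tuning" Steven refers to is not spelled out in the thread; the commonly circulated recipe for InnoDB on ZFS looks roughly like the sketch below. The dataset names are placeholders, and the primarycache setting is an often-suggested option rather than a universal rule:

# Commonly circulated InnoDB-on-ZFS dataset layout (names hypothetical):
zfs create -o recordsize=16k -o atime=off tank/db/data   # match InnoDB's 16K page size
zfs create -o recordsize=128k -o atime=off tank/db/log   # redo logs write sequentially
# Optionally cache only metadata for the data files, since InnoDB's
# buffer pool already caches the data itself:
zfs set primarycache=metadata tank/db/data

Disabling the InnoDB doublewrite buffer (as in the my.cnf quoted later in the thread) is only considered safe because ZFS itself checksums and writes atomically.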
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 17:51:50 2014
Message-ID: <1CF0F10D59CB4999B18D0922C232ED43@multiplay.co.uk>
Date: Mon, 10 Mar 2014 17:51:39 -0000
From: "Steven Hartland"
To: "Adrian Gschwend", "Adam Vande More"
Cc: freebsd-fs
Subject: Re: Reoccuring ZFS performance problems

----- Original Message -----
From: "Adrian Gschwend"
To: "Adam Vande More"
Cc: "freebsd-fs"
Sent: Monday, March 10, 2014 5:36 PM
Subject: Re: Reoccuring ZFS performance problems

> On 10.03.14 18:27, Adam Vande More wrote:
>> Are you using a ZFS backed swap device?
>
> It does not look like it:
>
>       34  419430333  vtbd0  GPT  (200G)
>       34        128      1  freebsd-boot  (64k)
>      162    8388608      2  freebsd-swap  (4.0G)
>  8388770  411041597      3  freebsd-zfs  (196G)

If you're hitting swap you'll likely hit a performance wall. Does top show any swapping in, or are you just seeing slow swap-out of unused processes?

Regards
Steve
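Steven's question is answerable with the stock tools; a quick sketch:

# Is the machine actively paging, or is the swap usage just stale?
swapinfo -h                  # which device backs swap and how much is allocated
vmstat -s | grep -i paged    # cumulative pages paged in/out since boot
top -b | head -8             # the Swap: header line shows current usage

Swap that was written out long ago and never touched again shows up in swapinfo but causes no ongoing IO; a steadily climbing "pages paged in" counter is what actually hurts.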
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 18:01:46 2014
Message-ID: <531DFE06.40307@netlabs.org>
Date: Mon, 10 Mar 2014 19:01:42 +0100
From: Adrian Gschwend
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: Reoccuring ZFS performance problems

On 10.03.14 18:50, Steven Hartland wrote:

Hi Steven,

> It's enabled but not in use, as the devices are reporting unsupported.

ah ok

> Disabled atime, configured innodb settings?

tank/storage/data/db/data  atime  off  inherited from tank/storage

my.cnf (part of it, let me know if you want the full file):

--
# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 4096

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 256M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 4G
skip-innodb_doublewrite = 1
--

MySQL is up again and it's getting slower (same at pastebin: http://pastebin.com/1qrWFppK ):

dT: 1.010s w: 1.000s
 L(q) ops/s  r/s kBps ms/r  w/s kBps  ms/w  d/s kBps  ms/d %busy Name
   10   249   74  191  0.6  171  531  37.5    0    0   0.0  75.9| vtbd0
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| vtbd0p1
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| vtbd0p2
   10   249   74  191 14.0  171  531  38.4    0    0   0.0  95.7| vtbd0p3
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| cd0
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| gptid/e402ecce-89ca-11e2-a867-3264262b9894
    0     0    0    0  0.0    0    0   0.0    0    0   0.0   0.0| gptid/e4112d88-89ca-11e2-a867-3264262b9894

regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 10 19:38:17 2014
Message-ID: <531E14A2.1060207@netlabs.org>
Date: Mon, 10 Mar 2014 20:38:10 +0100
From: Adrian Gschwend
To: freebsd-fs@freebsd.org
Subject: Re: Reoccuring ZFS performance problems
In-Reply-To: <531DF924.5030109@netlabs.org>

On 10.03.14 18:40, Adrian Gschwend wrote:

> It looks like my MySQL process finally finished and now the system is
> back to completely fine:

OK, it doesn't look like it's only MySQL: I stopped the process a while ago, and while things got calmer, I still have the issue.

regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 11 02:16:47 2014
Date: Mon, 10 Mar 2014 21:16:46 -0500
From: Adam Vande More
To: Adrian Gschwend
Cc: freebsd-fs
Subject: Re: Reoccuring ZFS performance problems

On Mon, Mar 10, 2014 at 12:36 PM, Adrian Gschwend wrote:

> On 10.03.14 18:27, Adam Vande More wrote:
>> Are you using a ZFS backed swap device?
>
> It does not look like it:
>
>       34  419430333  vtbd0  GPT  (200G)
>       34        128      1  freebsd-boot  (64k)
>      162    8388608      2  freebsd-swap  (4.0G)
>  8388770  411041597      3  freebsd-zfs  (196G)

The question remains unanswered. swapinfo(8) will show the swap device in use, and you can use that information to determine if it is ZFS backed. There was a swap device listed in your earlier post, but not enough info to determine what its underlying block device is.

--
Adam

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 11 08:25:46 2014
Message-ID: <531EC886.5040502@netlabs.org>
Date: Tue, 11 Mar 2014 09:25:42 +0100
From: Adrian Gschwend
To: Adam Vande More
Cc: freebsd-fs
Subject: Re: Reoccuring ZFS performance problems

On 11.03.14 03:16, Adam Vande More wrote:

Hi Adam,

> The question remains unanswered. swapinfo(8) will show the swap device
> in use, and you can use that information to determine if it is ZFS
> backed.

swapinfo:

Device                           512-blocks  Used   Avail    Capacity
/dev/gptid/e4112d88-89ca-11e2-a  8388608     45592  8343016  1%

looks like the gptid is truncated, but gpart list shows:

Name: vtbd0p2
Mediasize: 4294967296 (4.0G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 82944
Mode: r1w1e1
rawuuid: e4112d88-89ca-11e2-a867-3264262b9894
rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 4294967296
offset: 82944
type: freebsd-swap
index: 2
end: 8388769
start: 162

# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        tank       ONLINE       0     0     0
          vtbd0p3  ONLINE       0     0     0

so I guess it is indeed not ZFS, right?
regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 11 16:09:07 2014
Message-ID: <531F33CD.8080005@fsn.hu>
Date: Tue, 11 Mar 2014 17:03:25 +0100
From: "Nagy, Attila"
To: FreeBSD FS
Subject: ZFS eats memory/L2ARC while doing nothing?

Hi,

Yesterday I created a server with stable/10@r262152, made a zpool on it and left it running. The whole file system held only some empty directories (needed to populate it later for storage) and some programs which do some reads on it (stat'ing files for existence).

Today, I can see these graphs:

https://lh3.googleusercontent.com/-CY8hLr7s6DY/Ux8xgqolTuI/AAAAAAAAKec/5gR99PIhPYw/s800/zfs_mem-day.png
https://lh6.googleusercontent.com/-gbUzPy5U8ko/Ux8xeKOq_zI/AAAAAAAAKeU/ZFpZx3zE4I4/s800/memory-day.png
https://lh5.googleusercontent.com/-KT7DiptLwEA/Ux8xbMXZ9NI/AAAAAAAAKeM/x2ZbHUJVvug/s800/iopsstat-day.png

So it seems ZFS did an average of 5 write IOPS, and the L2ARC's size constantly grew to a whopping 6.2 GiB (again: the file system is empty, only 42*42 directories were created on it).

Is this normal? What causes this?
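One way to see what that L2ARC growth actually consists of is the arcstats kstats that stock FreeBSD ZFS exports; a sketch:

# How much of the L2ARC is real cached data vs. bookkeeping:
sysctl kstat.zfs.misc.arcstats.l2_size       # bytes of data the L2ARC holds
sysctl kstat.zfs.misc.arcstats.l2_asize      # the same, after compression
sysctl kstat.zfs.misc.arcstats.l2_hdr_size   # RAM the ARC spends tracking it

If l2_size grows while the pool is essentially idle, comparing it against l2_hdr_size at least distinguishes cached blocks from header overhead.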
From owner-freebsd-fs@FreeBSD.ORG Tue Mar 11 16:59:55 2014
Message-ID: <531F40C1.5000702@netlabs.org>
Date: Tue, 11 Mar 2014 17:58:41 +0100
From: Adrian Gschwend
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: Reoccuring ZFS performance problems

On 10.03.14 18:31, Steven Hartland wrote:

Hi Steven,

> Looks like you may be out of IOP/s, but just in case: are you using
> TRIM at all?

Regarding IOP/s: is it simply possible that the IO performance of my KVM just sucks?

regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 12 01:13:12 2014
Date: Wed, 12 Mar 2014 01:13:12 GMT
Message-Id: <201403120113.s2C1DC4D098641@freefall.freebsd.org>
From: mckusick@FreeBSD.org
To: mckusick@FreeBSD.org, freebsd-fs@FreeBSD.org
Subject: Re: bin/187221: [su+j] [patch] fsck_ufs -p segmentation fault with SU+J

Synopsis: [su+j] [patch] fsck_ufs -p segmentation fault with SU+J

Responsible-Changed-From-To: freebsd-fs->mckusick
Responsible-Changed-By: mckusick
Responsible-Changed-When: Wed Mar 12 01:12:43 UTC 2014
Responsible-Changed-Why:
I will take this one.
http://www.freebsd.org/cgi/query-pr.cgi?pr=187221 From owner-freebsd-fs@FreeBSD.ORG Wed Mar 12 18:01:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 14924DBC for ; Wed, 12 Mar 2014 18:01:30 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AF15BC91 for ; Wed, 12 Mar 2014 18:01:29 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2CI1H5i089558 for ; Wed, 12 Mar 2014 13:01:17 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Wed Mar 12 13:01:17 2014 Message-ID: <5320A0E8.2070406@denninger.net> Date: Wed, 12 Mar 2014 13:01:12 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Reoccurring ZFS performance problems [[Possible Analysis]] References: <531E2406.8010301@denninger.net> In-Reply-To: <531E2406.8010301@denninger.net> X-Forwarded-Message-Id: <531E2406.8010301@denninger.net> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms090709030901010208060606" X-Antivirus: avast! (VPS 140312-0, 03/12/2014), Outbound message X-Antivirus-Status: Clean X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 12 Mar 2014 18:01:30 -0000 This is a cryptographically signed message in MIME format. --------------ms090709030901010208060606 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable On 3/10/2014 2:38 PM, Adrian Gschwend wrote: > On 10.03.14 18:40, Adrian Gschwend wrote: > >> It looks like finally my MySQL process finished and now the system is >> back to completely fine: > ok it doesn't look it's only MySQL, stopped the process a while ago and= > while it got calmer, I still have the issue. ZFS can be convinced to engage in what I can only surmise is=20 pathological behavior, and I've seen no fix for it when it happens --=20 but there are things you can do to mitigate it. What IMHO _*should*_ happen is that the ARC cache should shrink as=20 necessary to prevent paging, subject to vfs.zfs.arc_min. To prevent=20 pathological problems with segments that have been paged off hours (or=20 more!) ago and never get paged back in because that particular piece of=20 code never executes again (but the process is also still alive so the=20 system cannot reclaim it and thus it shows "committed" in pstat -s but=20 unless it is paged back in has no impact on system performance) the=20 policing on this would have to apply a "reasonableness" filter to those=20 pages (e.g. if it has been out on the page file for longer than "X",=20 ignore that particular allocation unit for this purpose.) This would cause the ARC cache to flush itself down automatically as=20 executable and data segment RAM commitments increase. 
The documentation says that this is the case and how it should work, but it doesn't appear to actually be this way in practice for many workloads. I have seen "wired" RAM pinned at 20GB on one of my servers here with a fairly large DBMS running -- with pieces of its working set and even a user's shell (!) getting paged off, yet the ARC cache is not pared down to release memory. Indeed you can let the system run for hours under these conditions and the ARC wired memory will not decrease. Cutting back the DBMS's internal buffering does not help.

What I've done here is restrict the ARC cache size in an attempt to prevent this particular bit of bogosity from biting me, and it appears to (sort of) work. Unfortunately you cannot tune this while the system is running (otherwise a user daemon could conceivably slash away at the arc_max sysctl and force the deallocation of wired memory if it detected paging -- or near-paging, such as free memory below some user-configured threshold), only at boot time in /boot/loader.conf.

This is something that, should I get myself a nice hunk of free time, I may dive into and attempt to fix. It would likely take me quite a while to get up to speed on this as I've not gotten into the zfs code at all -- and mistakes in there could easily corrupt files.... (in other words definitely NOT something to play with on a production system!)

I have to assume there's a pretty-good reason why you can't change arc_max while the system is running; it _*can*_ be changed on a running system on some other implementations (e.g. Solaris.) It is marked with CTLFLAG_RDTUN in the arc management file, which prohibits run-time changes, and the only place I see it referenced with a quick look is in the arc_init code.

Note that the test in arc.c for "arc_reclaim_needed" appears to be pretty basic -- essentially the system will not aggressively try to reclaim memory unless used kmem > 3/4 of its size.

(snippet from arc.c around line 2494 of arc.c in 10-STABLE; path /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs)

#else	/* !sun */
	if (kmem_used() > (kmem_size() * 3) / 4)
		return (1);
#endif	/* sun */

Up above that there's a test for "vm_paging_needed()" that would (theoretically) appear to trigger first in these situations, but it doesn't in many cases.

IMHO this is too-basic of a test and leads to pathological situations in that the system may wind up paging things off as opposed to paring back the ARC cache. As soon as the working set of something that's actually getting cycles gets paged out, in most cases system performance goes straight in the trash.

On sun machines (from reading the code) it will allegedly try to pare any time the "lotsfree" (plus "needfree" + "extra") amount of free memory is invaded.

As an example this is what a server I own that is exhibiting this behavior now shows:

20202500 wire
 1414052 act
 2323280 inact
  110340 cache
  414484 free
 1694896 buf

Of that "wired" mem, 15.7G of it is ARC cache (with a target of 15.81, so it's essentially right up against it.)

That "free" number would be ok if it didn't result in the system having trashy performance -- but it does on occasion. Incidentally the allocated swap is about 195k blocks (~200 Megabytes), which isn't much all-in, but it's enough to force actual fetches of recently-used programs (e.g. your shell!) from paged-off space.
The thing is that if the test in the code (75% of kmem available consumed) was looking only at "free", the system should be aggressively trying to free up ARC cache. It clearly is not; the included code calls this:

uint64_t
kmem_used(void)
{

	return (vmem_size(kmem_arena, VMEM_ALLOC));
}

I need to dig around and see exactly what that's measuring, because what's quite clear is that the system _*thinks*_ it has plenty of free memory when it very-clearly is essentially out! In fact free memory at the moment (~400MB) is 1.7% of the total, _*not*_ 25%. From this I surmise that the "vmem_size" call is not returning the sum of all the above "in use" sizes (except perhaps "inact"); were it to do so that would be essentially 100% of installed RAM and the ARC cache should be actively under shrinkage, but it clearly is not.

I'll keep this one on my "to-do" list somewhere and if I get the chance see if I can come up with a better test. What might be interesting is to change the test to be "pare if free space less (pagefile space in use plus some modest margin) < 0".

Fixing this tidbit of code could potentially be pretty significant in terms of resolving the occasional but very annoying "freeze" problems that people sometimes run into, along with some mildly-pathological but very-significant behavior in terms of how the ARC cache auto-scales and its impact on performance. I'm nowhere near up-to-speed enough on the internals of the kernel when it comes to figuring out what it has committed (e.g. how much swap is out, etc) and thus there's going to be a lot of code-reading involved before I can attempt something useful.

--
-- Karl
karl@denninger.net
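For concreteness, a minimal sketch of the alternate test described above ("pare if free space less pagefile space in use plus some modest margin is < 0"). This is a sketch only, not the patch that was later filed: free_pages(), swap_used_pages() and ARC_FREE_MARGIN_PAGES are hypothetical stand-ins, not real kernel symbols, and the actual VM counters differ between FreeBSD releases.

#include <sys/types.h>			/* uint64_t, int64_t */

extern uint64_t free_pages(void);	/* hypothetical free-memory counter */
extern uint64_t swap_used_pages(void);	/* hypothetical swap-in-use counter */
#define	ARC_FREE_MARGIN_PAGES	2048	/* hypothetical margin: ~8 MB at 4 kB pages */

/*
 * Sketch: ask for ARC reclaim as soon as free memory no longer covers
 * what has already been pushed out to swap, plus a safety margin.
 */
static int
arc_reclaim_needed_sketch(void)
{
	int64_t surplus;

	surplus = (int64_t)free_pages() -
	    ((int64_t)swap_used_pages() + ARC_FREE_MARGIN_PAGES);

	return (surplus < 0);
}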
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 11:21:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EBE61285 for ; Fri, 14 Mar 2014 11:21:57 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 95366B17 for ; Fri, 14 Mar 2014 11:21:57 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2EBLu6T082751 for ; Fri, 14 Mar 2014 06:21:56 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Fri Mar 14 06:21:56 2014 Message-ID: <5322E64E.8020009@denninger.net> Date: Fri, 14 Mar 2014 06:21:50 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Reoccurring ZFS performance problems [RESOLVED] References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> In-Reply-To:
<5320A0E8.2070406@denninger.net> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms010709050506010308020207" X-Antivirus: avast! (VPS 140313-1, 03/13/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 11:21:58 -0000

On 3/12/2014 1:01 PM, Karl Denninger wrote:
> On 3/10/2014 2:38 PM, Adrian Gschwend wrote:
>> ok it doesn't look it's only MySQL, stopped the process a while ago and
>> while it got calmer, I still have the issue.
> ZFS can be convinced to engage in what I can only surmise is
> pathological behavior, and I've seen no fix for it when it happens --
> but there are things you can do to mitigate it.
> [...]
> I'll keep this one on my "to-do" list somewhere and if I get the
> chance see if I can come up with a better test.

In the context of the above, here's a fix. Enjoy.

http://www.freebsd.org/cgi/query-pr.cgi?pr=187572

> Category:       kern
> Responsible:    freebsd-bugs
> Synopsis:       ZFS ARC cache code does not properly handle low memory
> Arrival-Date:   Fri Mar 14 11:20:00 UTC 2014

--
-- Karl
karl@denninger.net
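The boot-time-only tuning discussed in this thread goes in /boot/loader.conf; a minimal example follows, with the values chosen purely for illustration, not as recommendations:

# /boot/loader.conf -- cap the ARC at boot (example value only)
vfs.zfs.arc_max="8G"
# the hard floor can be pinned the same way if desired
#vfs.zfs.arc_min="1G"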
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 12:00:07 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8B28AD29 for ; Fri, 14 Mar 2014 12:00:07 +0000 (UTC) Received: from mail-ve0-x229.google.com (mail-ve0-x229.google.com [IPv6:2607:f8b0:400c:c01::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4CA1BE26 for ; Fri, 14 Mar 2014 12:00:07 +0000 (UTC) Received: by mail-ve0-f169.google.com with SMTP id pa12so2553440veb.14 for ; Fri, 14 Mar 2014 05:00:05 -0700 (PDT) X-Received: by 10.220.191.134 with SMTP id dm6mr6205168vcb.16.1394798405095; Fri, 14 Mar 2014 05:00:05 -0700 (PDT) MIME-Version: 1.0 Received: by 10.58.252.165 with HTTP; Fri, 14 Mar 2014 04:59:34 -0700 (PDT) In-Reply-To: <5322E64E.8020009@denninger.net> References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> From: Matthias Gamsjager Date: Fri, 14 Mar 2014 12:59:34 +0100 Message-ID: Subject: Re: Reoccurring ZFS performance problems [RESOLVED] Cc: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive:
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 12:00:07 -0000

Great news :)

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 18:00:36 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B241E322 for ; Fri, 14 Mar 2014 18:00:36 +0000 (UTC) Received: from mail-vc0-x235.google.com (mail-vc0-x235.google.com [IPv6:2607:f8b0:400c:c03::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6C586B53 for ; Fri, 14 Mar 2014 18:00:36 +0000 (UTC) Received: by mail-vc0-f181.google.com with SMTP id id10so3064339vcb.40 for ; Fri, 14 Mar 2014 11:00:35 -0700 (PDT) X-Received: by 10.220.109.1 with SMTP id h1mr7389536vcp.20.1394820035522; Fri, 14 Mar 2014 11:00:35 -0700 (PDT) MIME-Version: 1.0 Received: by 10.58.252.165 with HTTP; Fri, 14 Mar 2014 11:00:04 -0700 (PDT) In-Reply-To: References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> From: Matthias Gamsjager Date: Fri, 14 Mar 2014 19:00:04 +0100 Message-ID: Subject: Re: Reoccurring ZFS performance problems [RESOLVED] To: Matthias Gamsjager Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 18:00:36 -0000

Not sure it's working correctly. Patched 10-stable and these are the arc stats:

ARC Size:                12.50%  865.31  MiB
Target Size: (Adaptive)  12.50%  865.05  MiB
Min Size (Hard Limit):   12.50%  865.05  MiB
Max Size (High Water):   8:1     6.76    GiB

top:

CPU:  1.5% user,  0.0% nice,  2.7% system,  0.2% interrupt, 95.6% idle
Mem: 103M Active, 88M Inact, 1498M Wired, 130M Buf, 6254M Free
ARC: 865M Total, 101M MFU, 478M MRU, 16K Anon, 43M Header, 242M Other
Swap: 4096M Total, 4096M Free

The ARC doesn't grow larger than the min size.
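(Those figures are internally consistent: arcstats reports the max:min ratio as 8:1, and one eighth of 6.76 GiB is 6922 MiB / 8 ≈ 865 MiB, so the ARC in this report is sitting exactly on its hard floor rather than merely staying small.)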
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 18:26:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A49B587D for ; Fri, 14 Mar 2014 18:26:44 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4EA96D7C for ; Fri, 14 Mar 2014 18:26:43 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2EIQcIr002853 for ; Fri, 14 Mar 2014 13:26:38 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Fri Mar 14 13:26:38 2014 Message-ID: <532349D9.7070902@denninger.net> Date: Fri, 14 Mar 2014 13:26:33 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: Matthias Gamsjager Subject: Re: Reoccurring ZFS performance problems [RESOLVED] References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070101030800040004070503" X-Antivirus: avast! (VPS 140314-0, 03/14/2014), Outbound message X-Antivirus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 18:26:44 -0000

It's possible I send-pr'd up the wrong diff. Will look at it immediately; it is working properly here.

On 3/14/2014 1:00 PM, Matthias Gamsjager wrote:
> Not sure it's working correctly.
> [...]
> The ARC doesn't grow larger than the min size.

--
-- Karl
karl@denninger.net
d5Wkfi9kc/qlcQxBTT6WgYQuYcCU2S1mZuYhcMWZ5TJzlrh5GnoRmKOSiiVTBihffwJ+gcxS mtfTT3b3WAXNr/tT0vukKsJi7hrwwHTnKvFsMSvHE/PGw4uxspfd67PObMDYSQBsvZwK3UiP rUY33CjQYKzP9z9Oa0mlfaoORKvWPwGg+lP1D0+RtvXCPnVHciK7+UsHcKNhaST02UgaAclQ k+7XECbtsMC6VOzMZS/kN8fVXJJHf+VQLC3SxNuPGlgqvFGgBL3ZKU9v79AveiJov6yTLXg6 QIjZMwR/V9zqu4C33v7EbdWYgivzrGAye0aPKUNCUGgtjunyaTi5HXMNN9bwHU+nIvJnGvJy pkSthBXWD/cRRQACh2fUFgZr0dPi9o88MgaL3zqiJDCUd5nTUGPr9eaT8G9kOl4tNBFbQg1t aVahc+T59jJwePST1VseCircHoNVSPFTlyEGajerxkhJ7GpvG6CxB9ctS5q+hpvZPKK9LcEv Ln/5kbGoXeUna/7Ga0j4SAC5h1yRdsW4xgdUNhwu2lir+dDVyv3zRLIaZ5vB+xKoKV5oC4eY ICfmUAaOIq5w0WSwD88n+G7fTbSksnVL6vImRWnUvz1NjZrFVtTifk8tTuigDvgrA2Mw7dz7 99aJ1rsAAAAAAAA= --------------ms070101030800040004070503-- From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 19:02:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7E50416D for ; Fri, 14 Mar 2014 19:02:48 +0000 (UTC) Received: from www94.your-server.de (www94.your-server.de [213.133.104.94]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3EE40140 for ; Fri, 14 Mar 2014 19:02:48 +0000 (UTC) Received: from [92.230.243.161] (helo=[192.168.178.40]) by www94.your-server.de with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.74) (envelope-from ) id 1WOX6q-0000lF-W5 for freebsd-fs@freebsd.org; Fri, 14 Mar 2014 19:45:33 +0100 Subject: exfat troubles on 10-stable From: Mathias Picker To: freebsd-fs@freebsd.org Content-Type: text/plain; charset="UTF-8" Organization: virtual earth GmbH Date: Fri, 14 Mar 2014 19:45:23 +0100 Message-ID: <1394822723.2242.21.camel@marcopolo.fritz.box> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-Authenticated-Sender: Mathias.Picker@virtual-earth.de X-Virus-Scanned: Clear (ClamAV 0.97.8/18599/Fri Mar 14 18:05:32 2014) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 19:02:48 -0000 Hi all, I just got my first micro-sdxc formatted with exfat. I installed sysutils/fusefs-exfat and could easily mount it. Reading works find. Creating directories and removing them works fine. But I'm absolutely unable to create a file on it???? marcopolo# mount.exfat /dev/da1s1 /media/disk FUSE exfat 1.0.1 marcopolo# cd /media/disk marcopolo# lf DCIM/ System Volume Information/ marcopolo# lf DCIM/Camera 20140225_164112.jpg* 20140306_183202.jpg* 20140306_204927.jpg* marcopolo# mkdir test marcopolo# touch test marcopolo# lf DCIM/ test/ System Volume Information/ marcopolo# rmdir test marcopolo# lf DCIM/ System Volume Information/ marcopolo# touch test touch: test: Invalid argument marcopolo# lf DCIM/ System Volume Information/ marcopolo# cp /COPYRIGHT . cp: ./COPYRIGHT: Invalid argument marcopolo# lf DCIM/ System Volume Information/ marcopolo# mkdir test marcopolo# touch test/test touch: test/test: Invalid argument This is on a recent -stable build marcopolo# uname -a FreeBSD marcopolo 10.0-STABLE FreeBSD 10.0-STABLE #17 r263014: Tue Mar 11 14:14:17 CET 2014 mathiasp@marcopolo:/usr/obj/usr/src/sys/GENERIC amd64 I rebuild fuse-libs just now... 
ntfs3g works.

The card reader is shown like this:

ugen6.3: at usbus6, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)

It's labeled Rollei SDXC miniUSB 2.0 9-in-1 Card reader. Both show up like this in dmesg:

ugen6.3: at usbus6
umass1: on usbus6
umass1: SCSI over Bulk-Only; quirks = 0x4100
umass1:4:1:-1: Attached to scbus4
da1 at umass-sim1 bus 1 scbus4 target 0 lun 0
da1: Removable Direct Access SCSI-0 device
da1: Serial Number 0020100507A00000
da1: 40.000MB/s transfers
da1: 60906MB (124735488 512 byte sectors: 255H 63S/T 7764C)
da1: quirks=0x2

The card is a Sandisk Extreme micro-SDXC 64gb. Working fine with Windows and Android.

Any ideas? Is this expected to work?? Shall I report this as a bug?

Thanks, Mathias

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 19:42:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1A382F7 for ; Fri, 14 Mar 2014 19:42:59 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id BD5CA6DC for ; Fri, 14 Mar 2014 19:42:58 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2EJgsf7029822 for ; Fri, 14 Mar 2014 14:42:54 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Fri Mar 14 14:42:54 2014 Message-ID: <53235BB9.8020804@denninger.net> Date: Fri, 14 Mar 2014 14:42:49 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Reoccurring ZFS performance problems [RESOLVED] References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <532349D9.7070902@denninger.net> In-Reply-To: <532349D9.7070902@denninger.net> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms000704050708020904070609" X-Antivirus: avast! (VPS 140314-0, 03/14/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 19:42:59 -0000

It's the wrong diff -- I'll be sending up a corrected one shortly. My bad; I didn't check in the last change on my local repo before I pulled the diff.

On 3/14/2014 1:26 PM, Karl Denninger wrote:
> It's possible I send-pr'd up the wrong diff. Will look at it
> immediately; it is working properly here.
>
> On 3/14/2014 1:00 PM, Matthias Gamsjager wrote:
>> Not sure it's working correctly.
>> [...]
>> The ARC doesn't grow larger than the min size.

--
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 20:52:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A777FACA for ; Fri, 14 Mar 2014 20:52:10 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 548D6D5B for ; Fri, 14 Mar 2014 20:52:09 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2EKq8eh014853 for ; Fri, 14 Mar 2014 15:52:08 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Fri Mar 14 15:52:08 2014 Message-ID: <53236BF3.9060500@denninger.net> Date: Fri, 14 Mar 2014 15:52:03 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: Matthias Gamsjager Subject: Re: Reoccurring ZFS performance problems [RESOLVED] References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms050503060608000206090001" X-Antivirus: avast! (VPS 140314-0, 03/14/2014), Outbound message X-Antivirus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 20:52:10 -0000
On 3/14/2014 1:00 PM, Matthias Gamsjager wrote:
> Not sure it's working correctly.
> [...]
> The ARC doesn't grow larger than the min size.

You'll like this much better :-)

http://www.freebsd.org/cgi/query-pr.cgi?pr=187594

Sorry about that....

--
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 23:04:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8F6B5572 for ; Fri, 14 Mar 2014 23:04:44 +0000 (UTC) Received: from mail-ve0-x233.google.com (mail-ve0-x233.google.com [IPv6:2607:f8b0:400c:c01::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4B44AA8D for ; Fri, 14 Mar 2014 23:04:44 +0000 (UTC) Received: by mail-ve0-f179.google.com with SMTP id db12so3402205veb.24 for ; Fri, 14 Mar 2014 16:04:43 -0700 (PDT) X-Received: by 10.52.165.105 with SMTP id yx9mr7194970vdb.22.1394838283446; Fri, 14 Mar 2014 16:04:43 -0700 (PDT) MIME-Version: 1.0 Received: by 10.58.252.165 with HTTP; Fri, 14 Mar 2014 16:04:13 -0700 (PDT) In-Reply-To: <53236BF3.9060500@denninger.net> References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net> From: Matthias Gamsjager Date: Sat, 15 Mar 2014 00:04:13 +0100 Message-ID: Subject: Re: Reoccurring ZFS performance problems [RESOLVED] To: Karl Denninger Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 23:04:44 -0000
Much better, thx :) Will this patch be reviewed by some kernel devs and merged?

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 14 23:57:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AE9B3A20 for ; Fri, 14 Mar 2014 23:57:02 +0000 (UTC) Received: from mail-lb0-x236.google.com (mail-lb0-x236.google.com [IPv6:2a00:1450:4010:c04::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 24479E1B for ; Fri, 14 Mar 2014 23:57:01 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id n15so2258818lbi.13 for ; Fri, 14 Mar 2014 16:57:00 -0700 (PDT) MIME-Version: 1.0 X-Received: by 10.112.52.104 with SMTP id s8mr7106620lbo.7.1394841420340; Fri, 14 Mar 2014 16:57:00 -0700 (PDT) Received: by 10.114.230.65 with HTTP; Fri, 14 Mar 2014 16:57:00 -0700 (PDT) Date: Fri, 14 Mar 2014 16:57:00 -0700 Message-ID: Subject: rsync w/ fake-super -> crashes zfs From: javocado To: FreeBSD Filesystems Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 14 Mar 2014 23:57:02 -0000

System specifics:
ZFS version 28
FreeBSD 8.3-RELEASE

We're seeing a repeatable outcome where a remote rsync command like:

rsync -axzHAXS --rsync-path="rsync --fake-super" --exclude '*/rsync.%stat'

backing up to our zfs filesystem (with 15M inodes) will lead to a panic with output like:

Fatal trap 12: page fault while in kernel mode
cpuid = 4; apic id = 04
fault virtual address   = 0x160
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80abb546
stack pointer           = 0x28:0xffffff976c62b910
frame pointer           = 0x28:0xffffff976c62b9d0
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 7295 (rsync)
[thread pid 7295 tid 101008 ]
Stopped at      zfs_freebsd_remove+0x426:       movq    0x160(%rax),%rsi

On the sending side (RHEL, ext3), rsync reports errors like:

rsync: failed to read xattr rsync.%stat
rsync: failed to write xattr rsync.%stat
rsync: get_xattr_names: llistxattr

which we've seen occasionally with other systems when running rsync with fake-super, but it usually doesn't lead to a crash.*

On the receiving side, other than the crashes, we are seeing a few new files (that don't exist on the source) named:

rsync.%stat

which correspond to and contain the owner and permission attributes that should have been stored in the extattr's for the file or directory.
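rsync's --fake-super mode stores the real ownership and permission bits of each transferred file in a user-namespace extended attribute named rsync.%stat (per the rsync manual), which is what the stray files above mirror. On the FreeBSD side those attributes can be inspected with the base system's extattr tools; a quick check, with /backup/somefile standing in for any received file:

# list user-namespace extended attributes on a received file
lsextattr user /backup/somefile
# print the stat data fake-super recorded (mode, device numbers, uid:gid)
getextattr user rsync.%stat /backup/somefile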
Not sure if they are a red herring, but they're usually not something we see (perhaps that's related to the --exclude '*/rsync.%stat' and rsync not being able to clean up properly). We are still testing to see if any options in the rsync command (above) may be contributing to the crash, since fake-super in and of itself runs fine under basic (rsync -av --rsync-path="rsync --fake-super" /src /dst) circumstances. But we suspect that the problem is related to fake-super and its reliance upon extattr's. What we really need is a solution to the crashing - some way to make ZFS stop choking on whatever --fake-super produces and/or how it's interacting with extattr's on ZFS. Thanks! * we sometimes also see on the sending side w/ fake-super: rsync: failed to write xattr rsync.%stat for "xxxxxx/file" : No such file or directory (2) when (1) the file exists, or (2) it's a symlink, but that isn't happening in this instance. We only mention it here as another oddity of fake-super + ZFS + extattr From owner-freebsd-fs@FreeBSD.ORG Sat Mar 15 21:21:47 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 92AF8D7F; Sat, 15 Mar 2014 21:21:47 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 674957DD; Sat, 15 Mar 2014 21:21:47 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2FLLlgO015354; Sat, 15 Mar 2014 21:21:47 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2FLLlkr015353; Sat, 15 Mar 2014 21:21:47 GMT (envelope-from linimon) Date: Sat, 15 Mar 2014 21:21:47 GMT Message-Id: <201403152121.s2FLLlkr015353@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 15 Mar 2014 21:21:47 -0000 Old Synopsis: REPLACES PR187572 - ZFS ARC behavior problem and fix New Synopsis: [zfs] [patch] ZFS ARC behavior problem and fix Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sat Mar 15 21:21:18 UTC 2014 Responsible-Changed-Why: Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=187594 From owner-freebsd-fs@FreeBSD.ORG Sun Mar 16 03:50:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id ADEFCAED for ; Sun, 16 Mar 2014 03:50:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 9A14493C for ; Sun, 16 Mar 2014 03:50:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2G3o188040406 for ; Sun, 16 Mar 2014 03:50:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2G3o1L1040405; Sun, 16 Mar 2014 03:50:01 GMT (envelope-from gnats) Date: Sun, 16 Mar 2014 03:50:01 GMT Message-Id: <201403160350.s2G3o1L1040405@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Adam McDougall Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Adam McDougall List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Mar 2014 03:50:01 -0000 The following reply was made to PR kern/187594; it has been noted by GNATS. From: Adam McDougall To: bug-followup@FreeBSD.org, karl@fs.denninger.net Cc: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Date: Sat, 15 Mar 2014 23:42:00 -0400 This is generally working well for me so far; I've been running it for over a day on my desktop at home with only 4G ram and I have not needlessly swapped. I generally have 1GB or more of free ram now, although I also decreased vfs.zfs.arc_freepage_percent_target to 15 because my ARC total was pretty low. At the moment I have 406M ARC and 1070M free with Thunderbird and over a dozen Chromium tabs open. Thanks for working on a patch!
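For anyone else who wants to experiment with the tunable mentioned above, it can be inspected and lowered at runtime; a minimal sketch, assuming a kernel built with the kern/187594 patch applied (the sysctl exists only in patched kernels):

    # show the current value, then lower the free-page target to 15 as described above
    sysctl vfs.zfs.arc_freepage_percent_target
    sysctl vfs.zfs.arc_freepage_percent_target=15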
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 08:33:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EB47280E for ; Mon, 17 Mar 2014 08:33:58 +0000 (UTC) Received: from r2-d2.netlabs.org (r2-d2.netlabs.org [213.238.45.90]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 8C1AF23C for ; Mon, 17 Mar 2014 08:33:56 +0000 (UTC) Received: (qmail 9305 invoked by uid 89); 17 Mar 2014 08:33:48 -0000 Received: from unknown (HELO eternal.metropolis.netlabs.org) (ml-ktk@netlabs.org@213.144.156.18) by 0 with ESMTPA; 17 Mar 2014 08:33:48 -0000 Message-ID: <5326B36B.2090601@netlabs.org> Date: Mon, 17 Mar 2014 09:33:47 +0100 From: Adrian Gschwend User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:17.0) Gecko/20130801 Thunderbird/17.0.8 MIME-Version: 1.0 To: freebsd-fs Subject: Growing a ZFS volume Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 08:33:59 -0000 Hi everyone, There is a manual about growing UFS partitions in the documentation: http://www.freebsd.org/doc/handbook/disks-growing.html growfs seems to be UFS only, anyone knows what would be the correct way to let a ZFS volume grow on the same disk? I have a similar setup from mfsbsd but my last partition is simply a ZFS one. regards Adrian From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 09:42:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 99D10E58 for ; Mon, 17 Mar 2014 09:42:23 +0000 (UTC) Received: from mail-ee0-x232.google.com (mail-ee0-x232.google.com [IPv6:2a00:1450:4013:c00::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 272C9A6B for ; Mon, 17 Mar 2014 09:42:22 +0000 (UTC) Received: by mail-ee0-f50.google.com with SMTP id c13so3839383eek.23 for ; Mon, 17 Mar 2014 02:42:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=zrDA0+PShCuK+iIbJPWHSGb4BJfVzdMs5TQJh9QRZxE=; b=m7Zp6YJayxwCa0AQpzY8cB807Uw9+6inR/5qXXR0zyAaZQpoDITtw5joydpmMQ13vf OzU1Tg2v83dOVZiYGctjk6ctgmccHkGrW3kyd8LOo9Qo/dQDFbjLzbkhZN+sTxB/irB0 nAhyWFe5RQgW7j5fntedlgibdJYnnut4R4w+2ynZtVeDGHpSZHn4scDooIhLfF63Txbn bhBpOZtHEvmsw4FDiCuLFxdw41zJhKLCvSWTIzvF2SWrM7yQooGrDNqQyzB3bxZ24Qd9 /1WagV+k0V+NjHEcqoiQRc4hvZGRbrv2VMsqryR/xoUnqaLywqtWDkAETxXSTYtwTvax xlOQ== X-Received: by 10.15.60.199 with SMTP id g47mr22875376eex.37.1395049341476; Mon, 17 Mar 2014 02:42:21 -0700 (PDT) Received: from strashydlo.home (aeba195.neoplus.adsl.tpnet.pl. 
[79.186.26.195]) by mx.google.com with ESMTPSA id i1sm38354202eeo.16.2014.03.17.02.42.20 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Mon, 17 Mar 2014 02:42:20 -0700 (PDT) Sender: =?UTF-8?Q?Edward_Tomasz_Napiera=C5=82a?= Subject: Re: Growing a ZFS volume Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=iso-8859-2 From: =?iso-8859-2?Q?Edward_Tomasz_Napiera=B3a?= In-Reply-To: <5326B36B.2090601@netlabs.org> Date: Mon, 17 Mar 2014 10:42:19 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> References: <5326B36B.2090601@netlabs.org> To: Adrian Gschwend X-Mailer: Apple Mail (2.1283) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 09:42:23 -0000 Message written by Adrian Gschwend on 17 Mar 2014, at 09:33: > Hi everyone, > > There is a manual about growing UFS partitions in the documentation: > > http://www.freebsd.org/doc/handbook/disks-growing.html > > growfs seems to be UFS only, anyone knows what would be the correct way > to let a ZFS volume grow on the same disk? I have a similar setup from > mfsbsd but my last partition is simply a ZFS one. See "man zpool", search for "autoexpand", or "online -e". From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 10:10:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F23D393E for ; Mon, 17 Mar 2014 10:10:43 +0000 (UTC) Received: from r2-d2.netlabs.org (r2-d2.netlabs.org [213.238.45.90]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 50E7DD5E for ; Mon, 17 Mar 2014 10:10:43 +0000 (UTC) Received: (qmail 17102 invoked by uid 89); 17 Mar 2014 10:10:39 -0000 Received: from unknown (HELO eternal.metropolis.netlabs.org) (ml-ktk@netlabs.org@213.144.156.18) by 0 with ESMTPA; 17 Mar 2014 10:10:39 -0000 Message-ID: <5326CA1E.6060808@netlabs.org> Date: Mon, 17 Mar 2014 11:10:38 +0100 From: Adrian Gschwend User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:17.0) Gecko/20130801 Thunderbird/17.0.8 MIME-Version: 1.0 To: =?ISO-8859-2?Q?Edward_Tomasz_Napiera=B3a?= Subject: Re: Growing a ZFS volume References: <5326B36B.2090601@netlabs.org> <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> In-Reply-To: <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: 8bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 10:10:44 -0000 On 17.03.14 10:42, Edward Tomasz Napierała wrote: Hi Edward, > See "man zpool", search for "autoexpand", or "online -e". Great, thanks. Who could help add this to the manual?
regards Adrian From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 10:48:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D2895309; Mon, 17 Mar 2014 10:48:57 +0000 (UTC) Received: from mail-vc0-x235.google.com (mail-vc0-x235.google.com [IPv6:2607:f8b0:400c:c03::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7BAF22B; Mon, 17 Mar 2014 10:48:57 +0000 (UTC) Received: by mail-vc0-f181.google.com with SMTP id id10so5394120vcb.26 for ; Mon, 17 Mar 2014 03:48:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type:content-transfer-encoding; bh=gC/X3cM85TYj+pOfJu73iCmmq51+S0S1cOG34vGPMuI=; b=sxNexL4tbVrnm4W0MO0BMvENLNWJen1szHOxA8S3A/30gKeYFnrGKddV1Gu4niuGhx n96o9pKNF6dVs8rjSJBIMXMOr6NuYuMbvFvpkrg2M+IPJqLSWcFGKJU/7fyKdhynmDwM 6bGNPNgoQQYMIhdkjuZUb5EYexTPbRflCL0WiL3i36k9K9oGxFrwLPBNfREzP5tHse38 2n017yh4ceS1X3/hBuJnBPmW/nwX8+fOwz9aoBgywKfUDnvuWW0Arvpc4bJ22PAzxSNv aKDa0DGG1NCG05iTGguYZLNpcIPpZ/flsPz0VGPQiQ8JnivIgn5lrU8H5aAn+qOyUnM3 J89Q== X-Received: by 10.52.173.165 with SMTP id bl5mr7181458vdc.13.1395053336584; Mon, 17 Mar 2014 03:48:56 -0700 (PDT) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Mon, 17 Mar 2014 03:48:36 -0700 (PDT) In-Reply-To: <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> References: <5326B36B.2090601@netlabs.org> <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> From: Anton Sayetsky Date: Mon, 17 Mar 2014 12:48:36 +0200 Message-ID: Subject: Re: Growing a ZFS volume To: =?ISO-8859-2?Q?Edward_Tomasz_Napiera=B3a?= Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 10:48:57 -0000 2014-03-17 11:42 GMT+02:00 Edward Tomasz Napierała: > Message written by Adrian Gschwend on 17 Mar 2014, at 09:33: >> Hi everyone, >> >> There is a manual about growing UFS partitions in the documentation: >> >> http://www.freebsd.org/doc/handbook/disks-growing.html >> >> growfs seems to be UFS only, anyone knows what would be the correct way >> to let a ZFS volume grow on the same disk? I have a similar setup from >> mfsbsd but my last partition is simply a ZFS one. > > See "man zpool", search for "autoexpand", or "online -e". autoexpand does not work, same as autoreplace.
;) From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 10:55:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5FAB93E5 for ; Mon, 17 Mar 2014 10:55:41 +0000 (UTC) Received: from r2-d2.netlabs.org (r2-d2.netlabs.org [213.238.45.90]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AAADB14A for ; Mon, 17 Mar 2014 10:55:40 +0000 (UTC) Received: (qmail 20070 invoked by uid 89); 17 Mar 2014 10:54:55 -0000 Received: from unknown (HELO eternal.metropolis.netlabs.org) (ml-ktk@netlabs.org@213.144.156.18) by 0 with ESMTPA; 17 Mar 2014 10:54:53 -0000 Message-ID: <5326D468.9060909@netlabs.org> Date: Mon, 17 Mar 2014 11:54:32 +0100 From: Adrian Gschwend User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:17.0) Gecko/20130801 Thunderbird/17.0.8 MIME-Version: 1.0 To: Anton Sayetsky Subject: Re: Growing a ZFS volume References: <5326B36B.2090601@netlabs.org> <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 10:55:41 -0000 On 17.03.14 11:48, Anton Sayetsky wrote: Hi Anton, > autoexpand does not work, same as autoreplace. ;) You mean it does not work in ZFS on FreeBSD? I just expanded it with online -e, worked great. regards Adrian From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 11:00:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2C4E14A2; Mon, 17 Mar 2014 11:00:47 +0000 (UTC) Received: from mail-vc0-x22b.google.com (mail-vc0-x22b.google.com [IPv6:2607:f8b0:400c:c03::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id CC5F517A; Mon, 17 Mar 2014 11:00:46 +0000 (UTC) Received: by mail-vc0-f171.google.com with SMTP id lg15so5659808vcb.30 for ; Mon, 17 Mar 2014 04:00:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=3D6ndFKfkC+LePBJDZgs0pez/fTP8LtCJrr6lpjMmiI=; b=uAZwbpxk48tlJbT9/ugtqk0JSnRpNJvotLH92nq2gsufgOr3HtKXd7We8e/+InMt2z pxTxkVxbFtEW5br5tKxjLgtaNtsqRqQ77miUhyE/K0/qFwou1PpaCyiSmTOggwVQ6Qd4 T5PJJY1gG2AbgddDuSoeo6Dq9Q/RIcqLVZXv/qbNsS9rirtdWlqQ/13zUaFarsu9pdDz RrnxpTmanrayC4gI82x/cny9dPIcxvSwKBjvRuO7GEj18cC0n3I+GOWHhg4GEMZuWc1N e68t8DMxtV2RNeftQ2jKjz7GkJqLN6CahFA7pT2oeylWZFR31A+DFCcYvTd4WhI/3zRE 0Iuw== X-Received: by 10.58.31.136 with SMTP id a8mr5999349vei.20.1395054045112; Mon, 17 Mar 2014 04:00:45 -0700 (PDT) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Mon, 17 Mar 2014 04:00:25 -0700 (PDT) In-Reply-To: <5326D468.9060909@netlabs.org> References: <5326B36B.2090601@netlabs.org> <159E2EDC-C49D-4982-BB65-F757D949B5FE@FreeBSD.org> <5326D468.9060909@netlabs.org> From: Anton Sayetsky Date: Mon, 17 Mar 2014 13:00:25 +0200 Message-ID: Subject: Re: Growing
a ZFS volume To: Adrian Gschwend Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 11:00:47 -0000 2014-03-17 12:54 GMT+02:00 Adrian Gschwend : > On 17.03.14 11:48, Anton Sayetsky wrote: > > Hi Anton, > >> autoexpand does not work, same as autoreplace. ;) > > You mean it does not work in ZFS on FreeBSD? Yep, that's right. Below I prove it:

Script started on Sat Mar 8 15:12:40 2014
root@jnb:~# truncate -s 20g /home/jason/test.fil
root@jnb:~# mdconfig -a -t vnode -f /home/jason/test.fil
md0
root@jnb:~# gpart create -s gpt md0
md0 created
root@jnb:~# gpart add -a 4k -t freebsd-zfs -s 10g md0
md0p1 added
root@jnb:~# gpart show md0
=>        34  41942973  md0  GPT  (20G)
          34         6       - free -  (3.0k)
          40  20971520    1  freebsd-zfs  (10G)
    20971560  20971447       - free -  (10G)
root@jnb:~# zpool create -o cachefile=none -o autoexpand=on -O canmount=off ztest /dev/md0p1
root@jnb:~# zpool list ztest
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
ztest  9.94G   137K  9.94G     0%  1.00x  ONLINE  -
root@jnb:~# zpool export ztest
root@jnb:~# gpart resize -i1 -a 4k -s 15g md0
md0p1 resized
root@jnb:~# gpart show md0
=>        34  41942973  md0  GPT  (20G)
          34         6       - free -  (3.0k)
          40  31457280    1  freebsd-zfs  (15G)
    31457320  10485687       - free -  (5G)
root@jnb:~# zpool import -o cachefile=none ztest
root@jnb:~# zpool list ztest
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
ztest  9.94G   204K  9.94G     0%  1.00x  ONLINE  -
root@jnb:~# zpool online -e ztest md0p1
root@jnb:~# zpool list ztest
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
ztest  14.9G   208K  14.9G     0%  1.00x  ONLINE  -
root@jnb:~# exit
Script done on Sat Mar 8 15:13:52 2014

> > I just expanded it with online -e, worked great. This is the only way to expand the pool. You cannot do it automatically on FreeBSD.
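Condensed from the transcript above, the manual procedure on a real disk looks like the sketch below; the pool name (ztank), disk (ada0), and partition index 1 are placeholders to substitute for your own layout:

    zpool export ztank                # export first, as in the transcript above
    gpart resize -i 1 -a 4k ada0      # grow the partition that backs the pool
    zpool import ztank
    zpool online -e ztank ada0p1      # expand the vdev into the enlarged partition

Note that the autoexpand=on property alone does not trigger the growth in the transcript; the explicit online -e is what claims the new space.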
> > regards > > Adrian From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 11:06:59 2014 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 03B90C70 for ; Mon, 17 Mar 2014 11:06:59 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id CD2842C2 for ; Mon, 17 Mar 2014 11:06:58 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2HB6wUx011513 for ; Mon, 17 Mar 2014 11:06:58 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2HB6wtx011511 for fs@FreeBSD.org; Mon, 17 Mar 2014 11:06:58 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 17 Mar 2014 11:06:58 GMT Message-Id: <201403171106.s2HB6wtx011511@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: fs@FreeBSD.org Subject: Current problem reports assigned to fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 11:06:59 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/167362 fs [fusefs] Reproduceble Page Fault when running rsync ov 1 problem total. 
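As a concrete instance of the query URL pattern given above, the single fusefs report in this listing can be viewed at:

    http://www.freebsd.org/cgi/query-pr.cgi?pr=167362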
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 17 11:06:43 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D78F08D0 for ; Mon, 17 Mar 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B9AF428D for ; Mon, 17 Mar 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2HB6hGB011212 for ; Mon, 17 Mar 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2HB6hVP011210 for freebsd-fs@FreeBSD.org; Mon, 17 Mar 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 17 Mar 2014 11:06:43 GMT Message-Id: <201403171106.s2HB6hVP011210@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Mar 2014 11:06:43 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix o kern/187261 fs [fuse] FUSE kernel panic when using socket / bind o bin/187071 fs [nfs] nfs server only start 2 daemons 1 master & 1 ser o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186574 fs [zfs] zpool history hangs (infinite loop) o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/185858 fs [zfs] zvol clone can't see new device o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. 
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o 
kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a 
non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs 
[zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 341 problems total. From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 01:26:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 27570807; Tue, 18 Mar 2014 01:26:30 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id AAE6B3F7; Tue, 18 Mar 2014 01:26:29 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqEEAPGfJ1ODaFve/2dsb2JhbABZhBiDBrwHgw6BO3SCLCMEUkQZAgRVBogMrkOibReOBgoBBgEcGRsHgm+BSQSQUZolg0khgSwBCBci X-IronPort-AV: E=Sophos;i="4.97,673,1389762000"; d="scan'208";a="106625938" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 17 Mar 2014 21:26:23 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 0E222B3F16; Mon, 17 Mar 2014 21:26:23 -0400 (EDT) Date: Mon, 17 Mar 2014 21:26:23 -0400 (EDT) From: Rick Macklem To: FreeBSD Filesystems Message-ID: <570922189.23999456.1395105983047.JavaMail.root@uoguelph.ca> In-Reply-To: <1351117550.23999435.1395105975009.JavaMail.root@uoguelph.ca> Subject: review/test: NFS patch to use pagesize mbuf clusters MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_23999454_1976975027.1395105983045" X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 01:26:30 -0000 ------=_Part_23999454_1976975027.1395105983045 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Hi, Several of the TSO capable network interfaces have a limit of 32 mbufs in the transmit mbuf chain (the drivers call these transmit segments, which I admit I find confusing). For a 64K read/readdir reply or 64K write request, NFS passes a list of 34 mbufs down to TCP. 
TCP will split the list, since it is slightly more than 64K bytes, but that split will normally be a copy by reference of the last mbuf cluster. As such, normally the network interface will get a list of 34 mbufs. For TSO enabled interfaces that are limited to 32 mbufs in the list, the usual workaround in the driver is to copy { real copy, not copy by reference } the list to 32 mbuf clusters via m_defrag(). (A few drivers use m_collapse() which is less likely to succeed.) As a workaround to this problem, the attached patch modifies NFS to use larger pagesize clusters, so that the 64K RPC message is in 18 mbufs (assuming a 4K pagesize). Testing on my slow hardware which does not have TSO capability shows it to be performance neutral, but I believe avoiding the overhead of copying via m_defrag() { and possible failures resulting in the message never being transmitted } makes this patch worth doing. As such, I'd like to request review and/or testing of this patch by anyone who can do so. Thanks in advance for your help, rick ps: If you don't get the attachment, just email and I'll send you a copy. ------=_Part_23999454_1976975027.1395105983045 Content-Type: text/x-patch; name=4kmcl.patch Content-Disposition: attachment; filename=4kmcl.patch Content-Transfer-Encoding: base64 LS0tIGZzL25mc3NlcnZlci9uZnNfbmZzZHBvcnQuYy5zYXYyCTIwMTQtMDEtMjYgMTg6NTQ6Mjku MDAwMDAwMDAwIC0wNTAwCisrKyBmcy9uZnNzZXJ2ZXIvbmZzX25mc2Rwb3J0LmMJMjAxNC0wMy0x NiAyMzoyMjo1Ni4wMDAwMDAwMDAgLTA0MDAKQEAgLTU2Niw4ICs1NjYsNyBAQCBuZnN2bm9fcmVh ZGxpbmsoc3RydWN0IHZub2RlICp2cCwgc3RydWN0CiAJbGVuID0gMDsKIAlpID0gMDsKIAl3aGls ZSAobGVuIDwgTkZTX01BWFBBVEhMRU4pIHsKLQkJTkZTTUdFVChtcCk7Ci0JCU1DTEdFVChtcCwg TV9XQUlUT0spOworCQlORlNNQ0xHRVQobXAsIE1fTk9XQUlUKTsKIAkJbXAtPm1fbGVuID0gTkZT TVNJWihtcCk7CiAJCWlmIChsZW4gPT0gMCkgewogCQkJbXAzID0gbXAyID0gbXA7CkBAIC02MjEs NyArNjIwLDcgQEAgbmZzdm5vX3JlYWQoc3RydWN0IHZub2RlICp2cCwgb2ZmX3Qgb2ZmLAogICAg IHN0cnVjdCB0aHJlYWQgKnAsIHN0cnVjdCBtYnVmICoqbXBwLCBzdHJ1Y3QgbWJ1ZiAqKm1wZW5k cCkKIHsKIAlzdHJ1Y3QgbWJ1ZiAqbTsKLQlpbnQgaTsKKwlpbnQgZG9fcGFnZXNpemUsIGk7CiAJ c3RydWN0IGlvdmVjICppdjsKIAlzdHJ1Y3QgaW92ZWMgKml2MjsKIAlpbnQgZXJyb3IgPSAwLCBs ZW4sIGxlZnQsIHNpeiwgdGxlbiwgaW9mbGFnID0gMDsKQEAgLTYzMCwxNCArNjI5LDMzIEBAIG5m c3Zub19yZWFkKHN0cnVjdCB2bm9kZSAqdnAsIG9mZl90IG9mZiwKIAlzdHJ1Y3QgbmZzaGV1ciAq bmg7CiAKIAlsZW4gPSBsZWZ0ID0gTkZTTV9STkRVUChjbnQpOworCWRvX3BhZ2VzaXplID0gMDsK KyNpZiBNSlVNUEFHRVNJWkUgIT0gTUNMQllURVMKKwlpZiAobGVmdCA+IE1DTEJZVEVTKQorCQlk b19wYWdlc2l6ZSA9IDE7CisjZW5kaWYKIAltMyA9IE5VTEw7CiAJLyoKIAkgKiBHZW5lcmF0ZSB0 aGUgbWJ1ZiBsaXN0IHdpdGggdGhlIHVpb19pb3YgcmVmLiB0byBpdC4KIAkgKi8KIAlpID0gMDsK IAl3aGlsZSAobGVmdCA+IDApIHsKLQkJTkZTTUdFVChtKTsKLQkJTUNMR0VUKG0sIE1fV0FJVE9L KTsKKwkJLyoKKwkJICogRm9yIGxhcmdlIHJlYWRzLCB0cnkgYW5kIGFjcXVpcmUgTUpVTVBBR0VT SVpFIGNsdXN0ZXJzLgorCQkgKiBIb3dldmVyLCBkbyBzbyB3aXRoIE1fTk9XQUlUIHNvIHRoZSB0 aHJlYWQgY2FuJ3QgZ2V0CisJCSAqIHN0dWNrIHNsZWVwaW5nIG9uICJidGFsbG9jIi4KKwkJICog SWYgdGhpcyBmYWlscywgdXNlIE5GU01DTEdFVCguLk1fTk9XQUlUKSwgd2hpY2ggZG9lcyBhbgor CQkgKiBNR0VUKC4uTV9XQUlUT0spIGZvbGxvd2VkIGJ5IGEgTUNMR0VUKC4uTV9OT1dBSVQpLiAg VGhlCisJCSAqIE1DTEdFVCguLk1fTk9XQUlUKSBtYXkgbm90IGdldCBhIGNsdXN0ZXIsIGJ1dCB3 aWxsIGRyYWluCisJCSAqIHRoZSBtYnVmIGNsdXN0ZXIgem9uZSB3aGVuIGl0IGZhaWxzLgorCQkg KiBBcyBzdWNoLCBhbiBtYnVmIHdpbGwgYWx3YXlzIGJlIGFsbG9jYXRlZCBhbmQgbW9zdCBsaWtl bHkKKwkJICogaXQgd2lsbCBoYXZlIGEgY2x1c3Rlci4KKwkJICovCisJCW0gPSBOVUxMOworCQlp ZiAoZG9fcGFnZXNpemUgIT0gMCkKKwkJCW0gPSBtX2dldGpjbChNX05PV0FJVCwgTVRfREFUQSwg MCwgTUpVTVBBR0VTSVpFKTsKKwkJaWYgKG0gPT0gTlVMTCkKKwkJCU5GU01DTEdFVChtLCBNX05P 
[Base64-encoded patch attachment omitted. Decoded, the diff replaces NFSM_BUILD() with a new NFSM_BUILD_PAGEMBCL() at the NFSv2/v3 readdir and readdirplus reply-building sites, switches nfscl_reqstart() and nfsm_uiombuf() in fs/nfsclient/nfs_clcomsubs.c from NFSMCLGET(.., M_WAITOK) to NFSMCLGET(.., M_NOWAIT) preceded by an m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE) attempt, adds the nfsm_build_pagembcl() inline and the NFSM_BUILD_PAGEMBCL macro to fs/nfs/nfsm_subs.h, and notes in fs/nfs/nfsport.h that NFSMCLGET(m, M_NOWAIT) still always allocates an mbuf (and can sleep) but might not get a cluster in the worst case.]

------=_Part_23999454_1976975027.1395105983045--

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 04:11:07 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0351A144; Tue, 18 Mar 2014 04:11:07 +0000 (UTC)
Received: from mail-we0-x22a.google.com (mail-we0-x22a.google.com [IPv6:2a00:1450:400c:c03::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6A76862D; Tue, 18 Mar 2014 04:11:06 +0000 (UTC)
Received: by mail-we0-f170.google.com with SMTP id w61so5410341wes.15 for ; Mon, 17 Mar 2014 21:11:04 -0700 (PDT)
MIME-Version: 1.0
X-Received: by 10.180.164.174 with SMTP id yr14mr12412112wib.18.1395115864793; Mon, 17 Mar 2014 21:11:04 -0700 (PDT)
Received: by 10.216.190.199 with HTTP; Mon, 17 Mar 2014 21:11:04 -0700 (PDT)
In-Reply-To: <570922189.23999456.1395105983047.JavaMail.root@uoguelph.ca>
References: <1351117550.23999435.1395105975009.JavaMail.root@uoguelph.ca> <570922189.23999456.1395105983047.JavaMail.root@uoguelph.ca>
Date: Tue, 18 Mar 2014 12:11:04 +0800
Message-ID:
Subject: Re: review/test: NFS patch to use pagesize mbuf clusters
From: Marcelo Araujo
To: Rick Macklem
Content-Type: text/plain; charset=ISO-8859-1
X-Content-Filtered-By: Mailman/MimeDel 2.1.17
Cc: FreeBSD Filesystems , Alexander Motin
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
Reply-To: araujo@FreeBSD.org
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 04:11:07 -0000

Hello Rick,

I have a couple of machines with 10G interfaces that are TSO capable. What kind of result are you expecting? Is it a speed-up in reads? I'm going to run some tests today, but against 9.1-RELEASE, which is what my servers are running.

Best Regards,

2014-03-18 9:26 GMT+08:00 Rick Macklem :

> Hi,
>
> Several of the TSO capable network interfaces have a limit of
> 32 mbufs in the transmit mbuf chain (the drivers call these transmit
> segments, which I admit I find confusing).
>
> For a 64K read/readdir reply or 64K write request, NFS passes
> a list of 34 mbufs down to TCP. TCP will split the list, since
> it is slightly more than 64K bytes, but that split will normally
> be a copy by reference of the last mbuf cluster. As such, normally
> the network interface will get a list of 34 mbufs.
>
> For TSO enabled interfaces that are limited to 32 mbufs in the
> list, the usual workaround in the driver is to copy { real copy,
> not copy by reference } the list to 32 mbuf clusters via m_defrag().
> (A few drivers use m_collapse() which is less likely to succeed.)
>
> As a workaround to this problem, the attached patch modifies NFS
> to use larger pagesize clusters, so that the 64K RPC message is
> in 18 mbufs (assuming a 4K pagesize).
>
> Testing on my slow hardware which does not have TSO capability
> shows it to be performance neutral, but I believe avoiding the
> overhead of copying via m_defrag() { and possible failures
> resulting in the message never being transmitted } makes this
> patch worth doing.
>
> As such, I'd like to request review and/or testing of this patch
> by anyone who can do so.
>
> Thanks in advance for your help, rick
> ps: If you don't get the attachment, just email and I'll
> send you a copy.
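[For readers who did not receive the attachment: the heart of the patch, as decoded from the base64 body above, is a cluster-allocation fallback. A minimal sketch of that pattern follows (mp is the mbuf being allocated; per the patch's own comments, NFSMCLGET(..M_NOWAIT) does an MGET(..M_WAITOK) followed by an MCLGET(..M_NOWAIT)):

	struct mbuf *mp;

#if MJUMPAGESIZE != MCLBYTES
	/*
	 * Try for a pagesize (MJUMPAGESIZE, typically 4K) cluster first,
	 * with M_NOWAIT so the thread cannot get stuck sleeping on
	 * "btalloc" if the jumbo cluster zone is exhausted.
	 */
	mp = m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE);
	if (mp == NULL)
#endif
		/*
		 * Fall back to a standard 2K cluster; an mbuf is always
		 * returned, though in the worst case without a cluster.
		 */
		NFSMCLGET(mp, M_NOWAIT);

With 4K clusters the 64K RPC message fits in 18 mbufs instead of 34, staying under the 32-segment limit of the TSO drivers described above.]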
> > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > -- Marcelo Araujo araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 08:40:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A6530DFE for ; Tue, 18 Mar 2014 08:40:03 +0000 (UTC) Received: from mail-we0-x22d.google.com (mail-we0-x22d.google.com [IPv6:2a00:1450:400c:c03::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3CCE9C76 for ; Tue, 18 Mar 2014 08:40:03 +0000 (UTC) Received: by mail-we0-f173.google.com with SMTP id w61so5592387wes.32 for ; Tue, 18 Mar 2014 01:40:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=0tfrQowP3LRNTZR3iaYTLGX50WGj0QBKBMaCVZCou3g=; b=Ud6i5Q8EpS4c840ad57GS+BzJlu+iiM2iVVGm1/5Nlt56OAd18oBd3vfa0spIaaLja KM2nNGrl8c0mWZu4zuMqZvYuOqKXijzbPBeIZatWcK/fnclr5CGbJ1VtvoE/r7yZoSXX WOXzq88J0ybrSeubBr7vzHUCofB8YuWvAdDQs5XZys4+3w0dlsoj97IY0izNljTdYSDd TxGVFi+iTfmqfpoZ8CcvRViOVLZOz6lwVjsPSa2HAKnk9iUo0W7GLZAtxDtrPs10BASi Z9Xk0ZBzCQEvp/yONeIPRBLGS7bc0D+ci42NQt2qax14hIfRq/2jlrUC7FhGo3URS8RX xbjw== X-Received: by 10.180.105.65 with SMTP id gk1mr13764184wib.12.1395132000830; Tue, 18 Mar 2014 01:40:00 -0700 (PDT) Received: from mavbook.mavhome.dp.ua ([134.249.139.101]) by mx.google.com with ESMTPSA id t5sm45255229wjw.15.2014.03.18.01.39.58 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 18 Mar 2014 01:39:59 -0700 (PDT) Sender: Alexander Motin Message-ID: <5328065D.60201@FreeBSD.org> Date: Tue, 18 Mar 2014 10:39:57 +0200 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0 MIME-Version: 1.0 To: Rick Macklem , FreeBSD Filesystems Subject: Re: review/test: NFS patch to use pagesize mbuf clusters References: <570922189.23999456.1395105983047.JavaMail.root@uoguelph.ca> In-Reply-To: <570922189.23999456.1395105983047.JavaMail.root@uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 08:40:03 -0000 Hi. On 18.03.2014 03:26, Rick Macklem wrote: > Several of the TSO capable network interfaces have a limit of > 32 mbufs in the transmit mbuf chain (the drivers call these transmit > segments, which I admit I find confusing). > > For a 64K read/readdir reply or 64K write request, NFS passes > a list of 34 mbufs down to TCP. TCP will split the list, since > it is slightly more than 64K bytes, but that split will normally > be a copy by reference of the last mbuf cluster. As such, normally > the network interface will get a list of 34 mbufs. > > For TSO enabled interfaces that are limited to 32 mbufs in the > list, the usual workaround in the driver is to copy { real copy, > not copy by reference } the list to 32 mbuf clusters via m_defrag(). 
> (A few drivers use m_collapse() which is less likely to succeed.)
>
> As a workaround to this problem, the attached patch modifies NFS
> to use larger pagesize clusters, so that the 64K RPC message is
> in 18 mbufs (assuming a 4K pagesize).
>
> Testing on my slow hardware which does not have TSO capability
> shows it to be performance neutral, but I believe avoiding the
> overhead of copying via m_defrag() { and possible failures
> resulting in the message never being transmitted } makes this
> patch worth doing.
>
> As such, I'd like to request review and/or testing of this patch
> by anyone who can do so.

First, I tried to find a respective NIC to test: cxgb/cxgbe have a limit of 36, and so are probably unaffected; ixgb -- 100; igb -- 64; only on em did I find a limit of 32.

I ran several profiles on an em NIC with and without the patch. I can confirm that without the patch m_defrag() is indeed called, while with the patch it is not any more. But the profiler shows me that only a very small amount of time (a few percent, or even fractions of a percent) is spent there. I can't measure the effect (my Core-i7 desktop test system has only about 5% CPU load while serving full 1Gbps NFS over the em), though I can't say for sure that the effect can't be there on some low-end system.

I am also not very sure about replacing M_WAITOK with M_NOWAIT. Instead of waiting a bit while the VM finds a cluster, NFSMCLGET() will return a single mbuf; as a result, the chain of 2K clusters may end up replaced not with 4K clusters but with a chain of 256-byte mbufs.

-- 
Alexander Motin

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 10:36:20 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B8F7EE5 for ; Tue, 18 Mar 2014 10:36:20 +0000 (UTC)
Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7A668B4C for ; Tue, 18 Mar 2014 10:36:20 +0000 (UTC)
Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s2IARa1C041895 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL) for ; Tue, 18 Mar 2014 06:27:37 -0400 (EDT) (envelope-from mikej@mikej.com)
Received: from firewall.mikej.com (localhost.mikej.com [127.0.0.1]) by firewall.mikej.com (8.14.8/8.14.8) with ESMTP id s2IAQsX6055124 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO) for ; Tue, 18 Mar 2014 06:27:35 -0400 (EDT) (envelope-from mikej@mikej.com)
Received: (from www@localhost) by firewall.mikej.com (8.14.8/8.14.8/Submit) id s2IAQsD0055123; Tue, 18 Mar 2014 06:26:54 -0400 (EDT) (envelope-from mikej@mikej.com)
X-Authentication-Warning: firewall.mikej.com: www set sender to mikej@mikej.com using -f
To:
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Date: Tue, 18 Mar 2014 06:26:24 -0400
From: mikej
In-Reply-To:
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net>
Message-ID:
X-Sender: mikej@mikej.com
User-Agent: Roundcube Webmail/0.6-beta
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 10:36:20 -0000

On 2014-03-14 19:04, Matthias Gamsjager wrote:
> Much better thx :)
>
> Will this patch be reviewed by some kernel devs and merged?

I am a little surprised this thread has been so quiet. I have been running with this patch and my desktop is more pleasant when memory demands are great - no more swapping - and wired no longer grows uncontrollably.

Is more review coming? The silence is deafening.

Regards,

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 11:06:23 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E6ACC602 for ; Tue, 18 Mar 2014 11:06:22 +0000 (UTC)
Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AE2F2E2E for ; Tue, 18 Mar 2014 11:06:22 +0000 (UTC)
Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2IB6EGK084724 for ; Tue, 18 Mar 2014 06:06:15 -0500 (CDT) (envelope-from karl@denninger.net)
Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Tue Mar 18 06:06:15 2014
Message-ID: <532828A1.6080605@denninger.net>
Date: Tue, 18 Mar 2014 06:06:09 -0500
From: Karl Denninger
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net>
In-Reply-To:
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080003040007000606060803"
X-Antivirus: avast! (VPS 140317-1, 03/17/2014), Outbound message
X-Antivirus-Status: Clean
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 11:06:23 -0000

This is a cryptographically signed message in MIME format.

--------------ms080003040007000606060803
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: quoted-printable

On 3/18/2014 5:26 AM, mikej wrote:
> On 2014-03-14 19:04, Matthias Gamsjager wrote:
>> Much better thx :)
>>
>> Will this patch be reviewed by some kernel devs and merged?
>
> I am a little surprised this thread has been so quiet. I have been
> running with this patch and my desktop is more pleasant when memory
> demands are great - no more swapping - and wired no longer grows
> uncontrollably.
>
> Is more review coming? The silence is deafening.
>
It makes an utterly-enormous difference here.

This is what one of my "nasty-busy" servers looks like this morning (it's got a very busy blog on it along with other things, and is pretty-quiet right now -- but it won't be in a couple of hours):

[systat -vmstat snapshot, flattened beyond full recovery in the archive. The recoverable figures: 1 user; load averages 0.22 0.25 0.21 at Mar 18 05:55; CPU 0.4% Sys, 0.1% Intr, 0.6% User, 0.0% Nice, 99.0% Idle; memory 17177460K wire, 2131860K act, 2158808K inact, 7512K cache, 2986396K free; the disks at most 7% busy.]

Here's the ARC cache....

[karl@NewFS ~]$ zfs-stats -A

------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Mar 18 05:56:42 2014
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                1.55m
        Recycle Misses:                         66.33k
        Mutex Misses:                           1.55k
        Evict Skips:                            4.14m

ARC Size:                               60.01%  13.40 GiB
        Target Size: (Adaptive)         60.01%  13.40 GiB
        Min Size (Hard Limit):          12.50%  2.79 GiB
        Max Size (High Water):          8:1     22.33 GiB

ARC Size Breakdown:
        Recently Used Cache Size:       79.13%  10.60 GiB
        Frequently Used Cache Size:     20.87%  2.80 GiB

ARC Hash Breakdown:
        Elements Max:                           1.34m
        Elements Current:               62.76%  840.43k
        Collisions:                             7.02m
        Chain Max:                              13
        Chains:                                 247.65k

------------------------------------------------------------------------

Note the scale-down from the maximum -- this is with:

[karl@NewFS ~]$ sysctl -a|grep percent
vfs.zfs.arc_freepage_percent_target: 10

My test machine has a lot less memory in it and there the default (25%) appears to be a good value.

Before this delta was put on the code this system would have tried to grab the entire 22GB to the exclusion of anything else. What I used to do was limit it to 16GB via arc_max, which was fine in the mornings and overnight, but during the day it didn't cut it -- and there was no way to change it without a reboot either. This particular machine has 24GB of RAM in it and provides services both externally and internally (separate interfaces.)

How efficient is the cache?
[karl@NewFS ~]$ zfs-stats -E

------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Mar 18 05:59:01 2014
------------------------------------------------------------------------

ARC Efficiency:                                 81.13m
        Cache Hit Ratio:                97.84%  79.38m
        Cache Miss Ratio:               2.16%   1.75m
        Actual Hit Ratio:               69.81%  56.64m

        Data Demand Efficiency:         99.09%  50.37m
        Data Prefetch Efficiency:       28.77%  1.46m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             28.48%  22.61m
          Most Recently Used:           6.81%   5.40m
          Most Frequently Used:         64.54%  51.23m
          Most Recently Used Ghost:     0.03%   24.86k
          Most Frequently Used Ghost:   0.13%   104.39k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  62.88%  49.91m
          Prefetch Data:                0.53%   419.73k
          Demand Metadata:              8.28%   6.57m
          Prefetch Metadata:            28.31%  22.47m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  26.03%  456.20k
          Prefetch Data:                59.29%  1.04m
          Demand Metadata:              9.84%   172.53k
          Prefetch Metadata:            4.84%   84.81k

------------------------------------------------------------------------

-- 
-- Karl
karl@denninger.net

[S/MIME cryptographic signature attachment (smime.p7s) omitted]
--------------ms080003040007000606060803--

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 13:07:34 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 19B1ECA1 for ; Tue, 18 Mar 2014 13:07:34 +0000 (UTC)
Received: from mail-wg0-x22f.google.com (mail-wg0-x22f.google.com [IPv6:2a00:1450:400c:c00::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 9B786DD5 for ; Tue, 18 Mar 2014 13:07:33 +0000 (UTC)
Received: by mail-wg0-f47.google.com with SMTP id x12so5963291wgg.30 for ; Tue, 18 Mar 2014 06:07:31 -0700 (PDT)
X-Received: by 10.194.103.36 with SMTP id ft4mr1556291wjb.66.1395148050783; Tue, 18 Mar 2014 06:07:30 -0700 (PDT)
Received: from [192.168.1.129] ([193.173.55.180]) by mx.google.com with ESMTPSA id lz3sm34606176wic.1.2014.03.18.06.07.29 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 18 Mar 2014 06:07:30 -0700 (PDT)
Message-ID: <53284511.3030901@gmail.com>
Date: Tue, 18 Mar 2014 14:07:29 +0100
From: Johan Hendriks
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Karl Denninger
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net>
  <53236BF3.9060500@denninger.net> <532828A1.6080605@denninger.net>
In-Reply-To: <532828A1.6080605@denninger.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 13:07:34 -0000

Karl Denninger wrote:
> It makes an utterly-enormous difference here.
>
> [remainder of Karl's 06:06 message -- the systat -vmstat screen, the
> zfs-stats -A report and his commentary on the
> vfs.zfs.arc_freepage_percent_target setting, quoted verbatim from the
> post above -- trimmed]
>
How do I apply the patch?
regards
Johan

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 13:26:56 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5C7695D2 for ; Tue, 18 Mar 2014 13:26:56 +0000 (UTC)
Received: from r2-d2.netlabs.org (r2-d2.netlabs.org [213.238.45.90]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A6F84FE3 for ; Tue, 18 Mar 2014 13:26:55 +0000 (UTC)
Received: (qmail 38888 invoked by uid 89); 18 Mar 2014 13:26:32 -0000
Received: from unknown (HELO eternal.metropolis.netlabs.org) (ml-ktk@netlabs.org@213.144.156.18) by 0 with ESMTPA; 18 Mar 2014 13:26:29 -0000
Message-ID: <53284973.8010203@netlabs.org>
Date: Tue, 18 Mar 2014 14:26:11 +0100
From: Adrian Gschwend
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:17.0) Gecko/20130801 Thunderbird/17.0.8
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net>
In-Reply-To:
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 13:26:56 -0000

On 18.03.14 11:26, mikej wrote:
> I am a little surprised this thread has been so quiet. I have been
> running with this patch and my desktop is more pleasant when memory
> demands are great - no more swapping - and wired no longer grows
> uncontrollably.
>
> Is more review coming? The silence is deafening.

Same here, it works very nicely so far, and the growth of memory looks much more controlled now. Before, within no time my server had all 16GB of RAM wired; now it's growing only slowly.

It's too early to say if my performance degradation is gone now, but it surely looks very good so far.

Thanks again to Karl for the patch! Hope others test it and integrate it soon.
regards
Adrian

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 13:50:36 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5EC22EBE for ; Tue, 18 Mar 2014 13:50:36 +0000 (UTC)
Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2683F29B for ; Tue, 18 Mar 2014 13:50:35 +0000 (UTC)
Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2IDoYre024707 for ; Tue, 18 Mar 2014 08:50:34 -0500 (CDT) (envelope-from karl@denninger.net)
Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Tue Mar 18 08:50:34 2014
Message-ID: <53284F25.9070007@denninger.net>
Date: Tue, 18 Mar 2014 08:50:29 -0500
From: Karl Denninger
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net> <53284973.8010203@netlabs.org>
In-Reply-To: <53284973.8010203@netlabs.org>
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms000907020706050007040602"
X-Antivirus: avast! (VPS 140318-1, 03/18/2014), Outbound message
X-Antivirus-Status: Clean
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 13:50:36 -0000

This is a cryptographically signed message in MIME format.

--------------ms000907020706050007040602
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: quoted-printable

On 3/18/2014 8:26 AM, Adrian Gschwend wrote:
> On 18.03.14 11:26, mikej wrote:
>> I am a little surprised this thread has been so quiet. I have been
>> running with this patch and my desktop is more pleasant when memory
>> demands are great - no more swapping - and wired no longer grows
>> uncontrollably.
>>
>> Is more review coming? The silence is deafening.
> Same here, it works very nicely so far, and the growth of memory looks
> much more controlled now. Before, within no time my server had all 16GB
> of RAM wired; now it's growing only slowly.
>
> It's too early to say if my performance degradation is gone now, but it
> surely looks very good so far.
>
> Thanks again to Karl for the patch! Hope others test it and integrate it
> soon.
>
Watch zfs-stats -A; you will see what the system has adapted to as opposed to the hard limits in arc_max and arc_min.

Changes upward in reservation percentage will be almost-instantly reflected in reduced allocation, where changes downward will grow slowly (there's a timed lockdown in the cache code that prevents it from grabbing more space immediately when it was previously throttled back, and the ARC cache in general only grows when I/O that is not in the cache occurs, and thus new data becomes available to cache for later re-use.)
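[For concreteness, a minimal sketch of the check this patch introduces, reconstructed from the thread's description rather than copied from the PR kern/187594 diff; the 10.x vmmeter global and field names below are assumptions:

	#include <sys/vmmeter.h>	/* struct vmmeter cnt */

	/* Tunable discussed above, exposed as
	 * vfs.zfs.arc_freepage_percent_target. */
	static u_int arc_freepage_percent_target = 25;

	/*
	 * Ask the ARC to shrink once free physical pages fall below a
	 * tunable percentage of all pages, instead of waiting for
	 * vm_paging_needed() to fire when the system is already paging.
	 */
	static int
	arc_reclaim_needed_sketch(void)
	{
		u_int freepages, targetpages;

		freepages = cnt.v_free_count + cnt.v_cache_count;
		targetpages = cnt.v_page_count *
		    arc_freepage_percent_target / 100;
		return (freepages < targetpages);
	}

On the 24GB server shown above the target is set to 10, so eviction begins while 10% of RAM is still free, which is why wired memory scales down before the box ever swaps.]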
The nice thing about the way it behaves now is that it will release memory immediately when required by other demands on the system, but if your active and inactive page count shrinks as images release RAM back through the cache and then to the free list, it will also be allowed to expand as I/O demand diversity warrants.

That was clearly the original design intent, but it was being badly frustrated by the former cache memory allocation behavior.

There is an argument for not including cache pages in the "used" bucket (that is, counting them as "free" instead); the way I coded it is a bit more conservative than going the other way. Given the design of the VM subsystem either is arguably acceptable, since a cache page can be freed when RAM is demanded. I decided not to do that for two reasons -- first, a page that is in the cache bucket could be reactivated, and if it is then you are going to have to release that ARC cache memory -- economy of action suggests that you not do something you might quickly have to undo. Second, my experience with the VM system over roughly a decade of use of FreeBSD supports an argument that the VM implementation is arguably the greatest strength that FreeBSD has, especially under stress, and by allowing it to do its job, rather than trying to "push" the VM system to do a particular thing, the philosophy of trusting that which is believed to know what it's up to is maintained.

-- 
-- Karl
karl@denninger.net

[S/MIME cryptographic signature attachment (smime.p7s) omitted]
--------------ms000907020706050007040602--

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 14:09:24 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 10C8B4D8 for ; Tue, 18 Mar 2014 14:09:24 +0000 (UTC)
Received: from keltia.net (cl-90.mrs-01.fr.sixxs.net [IPv6:2a01:240:fe00:59::2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id C3A6D643 for ; Tue, 18 Mar 2014 14:09:23 +0000 (UTC)
Received: from rron.local (unknown [207.126.87.2]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: roberto) by keltia.net (Postfix) with ESMTPSA id 2402B529E for ; Tue, 18 Mar 2014 15:09:10 +0100 (CET)
Date: Tue, 18 Mar 2014 15:10:03 +0100
From: Ollivier Robert
To: freebsd-fs@freebsd.org
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
Message-ID: <20140318141002.GB13818@rron.local>
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <53236BF3.9060500@denninger.net>
X-Operating-System: MacOS X / Macbook Pro - FreeBSD 7.2 / Dell D820 SMP
User-Agent: Mutt/1.5.21 (2010-09-15)
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 14:09:24 -0000

According to Karl Denninger:
> You'll like this much better :-)
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=187594
>
> Sorry about that....

Just to be precise, the diff is reversed, right?

Thanks for writing it :)
-- 
Ollivier ROBERT -=- FreeBSD: The Power to Serve! -=- roberto@keltia.net
In memoriam to Ondine, our 2nd child: http://ondine.keltia.net/

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 14:18:05 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DEEC0780 for ; Tue, 18 Mar 2014 14:18:05 +0000 (UTC)
Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 855397A1 for ; Tue, 18 Mar 2014 14:18:05 +0000 (UTC)
Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2IEI4v7033027 for ; Tue, 18 Mar 2014 09:18:04 -0500 (CDT) (envelope-from karl@denninger.net)
Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Tue Mar 18 09:18:04 2014
Message-ID: <53285597.5030200@denninger.net>
Date: Tue, 18 Mar 2014 09:17:59 -0500
From: Karl Denninger
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net> <20140318141002.GB13818@rron.local>
In-Reply-To: <20140318141002.GB13818@rron.local>
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms050009020907080809010304"
X-Antivirus: avast! (VPS 140318-1, 03/18/2014), Outbound message
X-Antivirus-Status: Clean
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 14:18:05 -0000

This is a cryptographically signed message in MIME format.

--------------ms050009020907080809010304
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: quoted-printable

On 3/18/2014 9:10 AM, Ollivier Robert wrote:
> According to Karl Denninger:
>> You'll like this much better :-)
>>
>> http://www.freebsd.org/cgi/query-pr.cgi?pr=187594
>>
>> Sorry about that....
> Just to be precise, the diff is reversed, right?
>
> Thanks for writing it :)
Yes, the product of insufficient coffee when I typed the command to produce it.
:-)

-- 
-- Karl
karl@denninger.net

[S/MIME cryptographic signature attachment (smime.p7s) omitted]
--------------ms050009020907080809010304--

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 15:20:01 2014
Return-Path:
Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5D196EA2 for ; Tue, 18 Mar 2014 15:20:01 +0000 (UTC)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3BD02EDA for ; Tue, 18 Mar 2014 15:20:01 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2IFK1PK069037 for ; Tue, 18 Mar 2014 15:20:01 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2IFK1M3069036; Tue, 18 Mar 2014 15:20:01 GMT (envelope-from gnats)
Date: Tue, 18 Mar 2014 15:20:01 GMT
Message-Id: <201403181520.s2IFK1M3069036@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
Cc:
From: Andriy Gapon
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
Reply-To: Andriy Gapon
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Tue, 18 Mar 2014 15:20:01 -0000

The following reply was made to PR kern/187594; it has been noted by GNATS.

From: Andriy Gapon
To: bug-followup@FreeBSD.org, karl@fs.denninger.net
Cc:
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Tue, 18 Mar 2014 17:15:05 +0200

Karl Denninger wrote:
> ZFS can be convinced to engage in pathological behavior due to a bad
> low-memory test in arc.c
>
> The offending file is at
> /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c; it allegedly
> checks for 25% free memory, and if it is less asks for the cache to shrink.
>
> (snippet from arc.c around line 2494 of arc.c in 10-STABLE; path
> /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs)
>
>         #else   /* !sun */
>         if (kmem_used() > (kmem_size() * 3) / 4)
>                 return (1);
>         #endif  /* sun */
>
> Unfortunately these two functions do not return what the authors thought
> they did. It's clear what they're trying to do from the Solaris-specific
> code up above this test.

No, these functions do return what the authors think they do. The check is for KVA usage (kernel virtual address space), not for physical memory.

> The result is that the cache only shrinks when vm_paging_needed() tests
> true, but by that time the system is in serious memory trouble and by

No, it is not.
The description and numbers here are a little bit outdated but they should give an idea of how paging works in general: https://wiki.freebsd.org/AvgPageoutAlgorithm

> triggering only there it actually drives the system further into paging,

How does ARC eviction drive the system further into paging?

> because the pager will not recall pages from the swap until they are next
> executed. This leads the ARC to try to fill in all the available RAM even
> though pages have been pushed off onto swap. Not good.

Unused physical memory is a waste. It is true that ARC tries to use as much memory as it is allowed. The same applies to the page cache (Active, Inactive). Memory management is a dynamic system and there are a few competing agents.

It is hard to correctly tune that system using a large hammer such as your patch. I believe that with your patch ARC will get shrunk to its minimum size in due time. Active + Inactive will grow to use the memory that you are denying to ARC, driving Free below a threshold, which will reduce ARC. Repeated enough times this will drive ARC to its minimum.

Also, there are a few technical problems with the patch:
- you don't need to use the sysctl interface in the kernel; the values you need are available directly, just take a look at e.g. the implementation of vm_paging_needed()
- similarly, querying the vfs.zfs.arc_freepage_percent_target value via kernel_sysctlbyname is just bogus; you can use percent_target directly
- you don't need to sum various page counters to get a total count, there is v_page_count

Lastly, can you try reverting your patch and instead setting vm.lowmem_period=0 ?

-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 15:42:01 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5825B5CB for ; Tue, 18 Mar 2014 15:42:01 +0000 (UTC)
Received: from mail-vc0-x22f.google.com (mail-vc0-x22f.google.com [IPv6:2607:f8b0:400c:c03::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 127252FE for ; Tue, 18 Mar 2014 15:42:01 +0000 (UTC)
Received: by mail-vc0-f175.google.com with SMTP id lh14so7368494vcb.6 for ; Tue, 18 Mar 2014 08:42:00 -0700 (PDT)
X-Received: by 10.221.55.199 with SMTP id vz7mr1599044vcb.40.1395157320281; Tue, 18 Mar 2014 08:42:00 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.252.165 with HTTP; Tue, 18 Mar 2014 08:41:30 -0700 (PDT)
In-Reply-To: <53284511.3030901@gmail.com>
References: <531E2406.8010301@denninger.net> <5320A0E8.2070406@denninger.net> <5322E64E.8020009@denninger.net> <53236BF3.9060500@denninger.net> <532828A1.6080605@denninger.net> <53284511.3030901@gmail.com>
From: Matthias Gamsjager
Date: Tue, 18 Mar 2014 16:41:30 +0100
Message-ID:
Subject: Re: Reoccurring ZFS performance problems [RESOLVED]
problems [RESOLVED] To: Johan Hendriks Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 15:42:01 -0000 > > >> How do I apply the patch? > > regards > Johan > > Download the patch. Svn up to 10-stable, put the patch file in /usr/src, then run patch -p0 < patch.txt From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 15:58:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 260E5ADB for ; Tue, 18 Mar 2014 15:58:17 +0000 (UTC) Received: from mail-oa0-x22f.google.com (mail-oa0-x22f.google.com [IPv6:2607:f8b0:4003:c02::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id E8471694 for ; Tue, 18 Mar 2014 15:58:16 +0000 (UTC) Received: by mail-oa0-f47.google.com with SMTP id i11so7204715oag.6 for ; Tue, 18 Mar 2014 08:58:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=SyfYh3AAoE4LvHuP84imDcgxNUwYiyh+onChjNf1QMs=; b=sym5HSiLnhN0LLaplqx01K5536YS9b2EkctCUUx2+vbfsMsNfDwGdw4BfNA4Gf8Tjo jA5CmNaLuxYK8Bu5wnLKd4i9QKyUfHvf/b+K2gTax9qmlhsI+Hs71Dceftf6g5k9BdG/ Rp3ntymd3tAsBoVaUU8soVsZYiHni2Z0eJB9f54Dw0ECy7Fc4oTfmkSKUSIdqJ6BGJqk 2PYmcZ0WtBa5+MskGUgpbh+AqPvk7Lgpc4NeFWGSlJ6/KEyL63DB6GjogHEVKyitWuZN CNPRMARUSYp4sikImB56uIZB38BXYBZXLc7+JtJXv0PtmjS2Ich57j2I2dT17niX8SEV TYnQ== MIME-Version: 1.0 X-Received: by 10.60.233.138 with SMTP id tw10mr3213910oec.56.1395158296268; Tue, 18 Mar 2014 08:58:16 -0700 (PDT) Received: by 10.76.115.129 with HTTP; Tue, 18 Mar 2014 08:58:16 -0700 (PDT) Date: Tue, 18 Mar 2014 11:58:16 -0400 Message-ID: Subject: Is the NFS Replay Cache needed for correctness with TCP mounts? From: Ryan Stone To: "freebsd-fs@freebsd.org" Content-Type: text/plain; charset=ISO-8859-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 15:58:17 -0000 My understanding of the NFS replay cache is that it's used so that the NFS server can avoid trying non-idempotent requests twice if it handles a retransmitted request (because the response to the first request was lost in transit, for example). Is this really only needed for UDP mounts? I would expect TCP mounts to not have the problem because the TCP layer should handle the retransmits and the NFS code should never see the same request twice. Is this correct? I ask because I have an NFS server (using the default legacy NFS implementation) running FreeBSD 8.2 that is having problems with entries in the replay cache becoming badly corrupted, leading to mbuf leaks and system crashes. I know that the NFS code has been rewritten as of FreeBSD 9 so hopefully the issue is fixed in future versions, but for the short term I'm not able to upgrade.
I control the clients and I know that they all use TCP mounts, so I was wondering if patching the server to disable the replay cache would be a plausible short-term workaround for the issue until I can upgrade, or if I'm courting disaster. Thanks, Ryan From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 16:55:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2C1214FB; Tue, 18 Mar 2014 16:55:27 +0000 (UTC) Received: from mail-lb0-x234.google.com (mail-lb0-x234.google.com [IPv6:2a00:1450:4010:c04::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 673B6DC2; Tue, 18 Mar 2014 16:55:26 +0000 (UTC) Received: by mail-lb0-f180.google.com with SMTP id 10so4908830lbg.39 for ; Tue, 18 Mar 2014 09:55:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=a91tCEMvEooOaMc5DpF+25s/LAGV0GLxJ0QFPyIoxYw=; b=HyQGT2vac4+pqSn2Cf1meGyV7EHmyd8/1YxIfzJ/t7uNaKJjolScFYu+6euIVmcUUY rROKMfgeY+Cyy+RDFyOCpYPKlmNzC7P9HQhJuBX5e+JltopUGONtFvJEny3sD3pVPGKN 6xxwZx27dCh/mWKHf/XCrv0vibB0yqgz/YSCnlx323+V1cgvfqKLrw2aoyuwDfOJjhp5 M2+EsP/u3j4g6gRqAe8PYNDu17Ttz5mZ1roHJmosILd4EoxQqZUWUt/FElexyyQN7vcO XsWwUkWQ6raP+GupTxPSxQUdxxEfiD4hn2s1o5OegGlWID2Xs1hM7QOklI7G8qJl7Rxx V6tA== X-Received: by 10.152.22.37 with SMTP id a5mr22139567laf.4.1395161724459; Tue, 18 Mar 2014 09:55:24 -0700 (PDT) Received: from [192.168.1.129] (mau.donbass.com. [92.242.127.250]) by mx.google.com with ESMTPSA id k2sm7427710lbm.23.2014.03.18.09.55.22 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 18 Mar 2014 09:55:23 -0700 (PDT) Message-ID: <53287A79.9060807@b1t.name> Date: Tue, 18 Mar 2014 18:55:21 +0200 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: Andriy Gapon , freebsd-fs@FreeBSD.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> In-Reply-To: <201403181520.s2IFK1M3069036@freefall.freebsd.org> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 16:55:27 -0000 18.03.2014 17:20, Andriy Gapon wrote: > Karl Denninger wrote: > > ZFS can be convinced to engage in pathological behavior due to a bad > > low-memory test in arc.c > > > > The offending file is at > > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c; it allegedly > > checks for 25% free memory, and if it is less asks for the cache to shrink. > > > > (snippet from arc.c around line 2494 of arc.c in 10-STABLE; path > > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs) > > > > #else /* !sun */ > > if (kmem_used() > (kmem_size() * 3) / 4) > > return (1); > > #endif /* sun */ > > > > Unfortunately these two functions do not return what the authors thought > > they did. It's clear what they're trying to do from the Solaris-specific > > code up above this test. 
> > No, these functions do return what the authors think they do. > The check is for KVA usage (kernel virtual address space), not for physical > memory. > > > The result is that the cache only shrinks when vm_paging_needed() tests > > true, but by that time the system is in serious memory trouble and by > > No, it is not. > The description and numbers here are a little bit outdated but they should give > an idea of how paging works in general: > https://wiki.freebsd.org/AvgPageoutAlgorithm > > > triggering only there it actually drives the system further into paging, > > How does ARC eviction drive the system further into paging? > > > because the pager will not recall pages from the swap until they are next > > executed. This leads the ARC to try to fill in all the available RAM even > > though pages have been pushed off onto swap. Not good. > > Unused physical memory is a waste. It is true that ARC tries to use as much > memory as it is allowed. The same applies to the page cache (Active, Inactive). > Memory management is a dynamic system and there are a few competing agents. I'd rather it were capped at a maximum of 500M or 5% of memory. On a loaded server this wouldn't hurt performance but would give the VM system a good window in which to stay reasonable. > It is hard to correctly tune that system using a large hammer such as your > patch. I believe that with your patch ARC will get shrunk to its minimum size > in due time. Active + Inactive will grow to use the memory that you are denying > to ARC, driving Free below a threshold, which will reduce ARC. Repeated enough > times this will drive ARC to its minimum. But what is worse: having program memory paged out to disk, or some random data from the disk being cached? Yes, I know that there are situations where a large amount of inactive memory would hurt performance. But putting the file cache above inactive memory is bad too. I see no benefit in having a 4G ARC cache while 2G of inactive memory is swapped out, leaving inactive at 50M. Any Java service can hold a large amount of memory that it needs only occasionally, so most of that memory gets swapped out and the process becomes slow, but at least we can browse the disk faster... The only solution for this is to give ARC pages and inactive pages even odds of being evicted. > Also, there are a few technical problems with the patch: > - you don't need to use the sysctl interface in the kernel; the values you need are > available directly, just take a look at e.g. the implementation of vm_paging_needed() > - similarly, querying the vfs.zfs.arc_freepage_percent_target value via > kernel_sysctlbyname is just bogus; you can use percent_target directly > - you don't need to sum various page counters to get a total count, there is > v_page_count > > Lastly, can you try to test reverting your patch and instead setting > vm.lowmem_period=0 ? Actually I already tried that patch and compared it to lowmem_period. The patch works much better despite actually being a crutch... The whole thing is because of two issues: 1. The kernel cannot reorder memory when some process (like VirtualBox) needs to allocate a big hunk at once. Right now the only working solution for the kernel is to push inactive pages to the swap even when there is enough free memory to hold the whole allocation. There's no in-memory reordering. And as ARC shrinks only when free memory is low, it completely ignores this condition and doesn't return a single page to the VM. 2. What ARC takes can't be freed because there's no simple opposite interface to get X blocks back from ARC.
It would be much better if ARC would be arranged in a way that the system can shrink it with a simple call, like the page cache. Without this we are already taking this route: * the system needs space; * arc starts shrinking; * while arc shrinks, some memory is cached to swap and becomes available; * the memory freed by swapping is taken and the process starts working; * arc completes shrinking and starts to grow again because of disk activity. As far as I understand, our VM system tries to maintain a predefined percentage of memory clean or at least cached to swap so this memory can be quickly claimed. So swapping wins, ARC loses, and swap is never read back again unless explicitly required. This is because it's too late to evict anything from ARC when we need memory. If there were a way for ARC to mark some pages as freely purgeable (probably with a callback to tell ARC which pages were purged) I think this problem would be gone. -- Sphinx of black quartz, judge my vow. From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 17:00:37 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 742E4904 for ; Tue, 18 Mar 2014 17:00:37 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id A8740E11 for ; Tue, 18 Mar 2014 17:00:36 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA29096; Tue, 18 Mar 2014 19:00:34 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WPxNS-000BDT-5s; Tue, 18 Mar 2014 19:00:34 +0200 Message-ID: <53287B8E.8060007@FreeBSD.org> Date: Tue, 18 Mar 2014 18:59:58 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Volodymyr Kostyrko , freebsd-fs@FreeBSD.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> <53287A79.9060807@b1t.name> In-Reply-To: <53287A79.9060807@b1t.name> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 17:00:37 -0000 on 18/03/2014 18:55 Volodymyr Kostyrko said the following: > 1. The kernel cannot reorder memory when some process (like VirtualBox) Issues caused by VirtualBox needing contiguous memory are completely separate from generic VM and ZFS ARC issues and, thus, should be discussed and fixed separately.
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 17:19:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6B30216E for ; Tue, 18 Mar 2014 17:19:39 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 1D71185 for ; Tue, 18 Mar 2014 17:19:38 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2IHJbTc086152 for ; Tue, 18 Mar 2014 12:19:37 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Tue Mar 18 12:19:37 2014 Message-ID: <53288024.2060005@denninger.net> Date: Tue, 18 Mar 2014 12:19:32 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 MIME-Version: 1.0 To: avg@FreeBSD.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> In-Reply-To: <201403181520.s2IFK1M3069036@freefall.freebsd.org> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms000607060402000804060701" X-Antivirus: avast! (VPS 140318-1, 03/18/2014), Outbound message X-Antivirus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 17:19:39 -0000 This is a cryptographically signed message in MIME format. On 3/18/2014 10:20 AM, Andriy Gapon wrote: > The following reply was made to PR kern/187594; it has been noted by GNATS. > > From: Andriy Gapon > To: bug-followup@FreeBSD.org, karl@fs.denninger.net > Cc: > Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix > Date: Tue, 18 Mar 2014 17:15:05 +0200 > > Karl Denninger wrote: > > ZFS can be convinced to engage in pathological behavior due to a bad > > low-memory test in arc.c > > > > The offending file is at > > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c; it allegedly > > checks for 25% free memory, and if it is less asks for the cache to shrink. > > > > (snippet from arc.c around line 2494 of arc.c in 10-STABLE; path > > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs) > > > > #else /* !sun */ > > if (kmem_used() > (kmem_size() * 3) / 4) > > return (1); > > #endif /* sun */ > > > > Unfortunately these two functions do not return what the authors thought > > they did. It's clear what they're trying to do from the Solaris-specific > > code up above this test. > > No, these functions do return what the authors think they do. > The check is for KVA usage (kernel virtual address space), not for physical memory. I understand, but that's nonsensical in the context of the Solaris code. "lotsfree" is *not* a declaration of free kvm space, it's a declaration of when the system has "lots" of free *physical* memory.
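For illustration, the two checks being contrasted here look roughly like this (a simplified sketch assembled from the snippets quoted in this thread, not verbatim source):

        /*
         * Solaris-style test: compares *physical* free pages against
         * the pageout scanner's thresholds (see the Sun code quoted
         * further down in this message).
         */
        if (freemem < lotsfree + needfree + extra)
                return (1);     /* ask the ARC to shrink */

        /*
         * FreeBSD !sun fallback: compares kernel virtual address space
         * usage, which can stay comfortable even when physical RAM is
         * tight.
         */
        if (kmem_used() > (kmem_size() * 3) / 4)
                return (1);     /* ask the ARC to shrink */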
Further it makes no sense at all to allow the ARC cache to force things into virtual (e.g. swap-space backed) memory. But that's the behavior that has been observed, and it fits with the code as originally written. > > > The result is that the cache only shrinks when vm_paging_needed() tests > > true, but by that time the system is in serious memory trouble and by > > No, it is not. > The description and numbers here are a little bit outdated but they should give > an idea of how paging works in general: > https://wiki.freebsd.org/AvgPageoutAlgorithm > > > triggering only there it actually drives the system further into paging, > > How does ARC eviction drive the system further into paging? 1. The system gets low on physical memory but the ARC cache is looking at available kvm (of which there is plenty.) The ARC cache continues to expand. 2. vm_paging_needed() returns true and the system begins to page off to the swap. At the same time the ARC cache is pared down because arc_reclaim_needed has returned "1". 3. As the ARC cache shrinks and paging occurs, vm_paging_needed() returns false. Paging out ceases but inactive pages remain on the swap. They are not recalled until and unless they are scheduled to execute. Arc_reclaim_needed again returns "0". 4. The hold-down timer expires in the ARC cache code ("arc_grow_retry", declared as 60 seconds) and the ARC cache begins to expand again. Go back to #2 until the system's performance starts to deteriorate badly enough due to the paging that you notice it, which occurs when something that is actually consuming CPU time has to be called in from swap. This is consistent with what I and others have observed on both 9.2 and 10.0; the ARC will expand until it hits the maximum configured even at the expense of forcing pages onto the swap. In this specific machine's case left to defaults it will grab nearly all physical memory (over 20GB of 24) and wire it down. Limiting arc_max to 16GB sorta fixes it. I say "sorta" because it turns out that 16GB is still too much for the workload; it prevents the pathological behavior where system "stalls" happen but only in the extreme. It turns out with the patch in, my ARC cache stabilizes at about 13.5GB during the busiest part of the day, growing to about 16 off-hours. One of the problems with just limiting it in /boot/loader.conf is that you have to guess and the system doesn't reasonably adapt to changing memory loads. The code is clearly intended to do that but it doesn't end up working that way in practice. > > > because the pager will not recall pages from the swap until they are next > > executed. This leads the ARC to try to fill in all the available RAM even > > though pages have been pushed off onto swap. Not good. > > Unused physical memory is a waste. It is true that ARC tries to use as much > memory as it is allowed. The same applies to the page cache (Active, Inactive). > Memory management is a dynamic system and there are a few competing agents. > That's true. However, what the stock code does is force the working set out of memory and into the swap. The ideal situation is one in which there is no free memory because cache has sized itself to consume everything *not* necessary for the working set of the processes that are running.
Unfortunately we cannot determine this presciently because a new process may come along and we do not necessarily know for how long a process that is blocked on an event will remain blocked (e.g. something waiting on network I/O, etc.) However, it is my contention that you do not want to evict a process that is scheduled to run (or is going to be) in favor of disk cache because you're defeating yourself by doing so. The point of the disk cache is to avoid going to the physical disk for I/O, but if you page something out you have ditched a physical I/O for data in favor of having to go to physical disk *twice* -- first to write the paged-out data to swap, and then to retrieve it when it is to be executed. This also appears to be consistent with what is present for Solaris machines. From the Sun code:

#ifdef sun
        /*
         * take 'desfree' extra pages, so we reclaim sooner, rather than later
         */
        extra = desfree;

        /*
         * check that we're out of range of the pageout scanner.  It starts to
         * schedule paging if freemem is less than lotsfree and needfree.
         * lotsfree is the high-water mark for pageout, and needfree is the
         * number of needed free pages.  We add extra pages here to make sure
         * the scanner doesn't start up while we're freeing memory.
         */
        if (freemem < lotsfree + needfree + extra)
                return (1);

        /*
         * check to make sure that swapfs has enough space so that anon
         * reservations can still succeed. anon_resvmem() checks that the
         * availrmem is greater than swapfs_minfree, and the number of reserved
         * swap pages.  We also add a bit of extra here just to prevent
         * circumstances from getting really dire.
         */
        if (availrmem < swapfs_minfree + swapfs_reserve + extra)
                return (1);

"freemem" is not virtual memory, it's actual memory. "Lotsfree" is the point where the system considers free RAM to be "ample"; "needfree" is the "desperation" point and "extra" is the margin (presumably for image activation.) The base code on FreeBSD doesn't look at physical memory at all; it looks at kvm space instead. > It is hard to correctly tune that system using a large hammer such as your > patch. I believe that with your patch ARC will get shrunk to its minimum size > in due time. Active + Inactive will grow to use the memory that you are denying > to ARC, driving Free below a threshold, which will reduce ARC. Repeated enough > times this will drive ARC to its minimum. I disagree both in design theory and based on the empirical evidence of actual operation. First, I don't (ever) want to give memory to the ARC cache that otherwise would go to "active", because any time I do that I'm going to force two page events, which is double the amount of I/O I would take on a cache *miss*, and even with the ARC at minimum I get a reasonable hit percentage. If I therefore prefer ARC over "active" pages I am going to take *at least* a 200% penalty on physical I/O, and if I get an 80% hit ratio with the ARC at a minimum the penalty is closer to 800%! For inactive pages it's a bit more complicated as those may not be reactivated. However, I am trusting FreeBSD's VM subsystem to demote those that are unlikely to be reactivated to the cache bucket and then to "free", where they are able to be re-used.
This is consistent with what I actually see on a running system -- the "inact" bucket is typically fairly large (often on a busy machine close to that of "active") but pages demoted to "cache" don't stay there long - they either get re-promoted back up or they are freed and go on the free list. The only time I see "inact" get out of control is when there's a kernel memory leak somewhere (such as what I ran into the other day with the in-kernel NAT subsystem on 10-STABLE.) But that's a bug and if it happens you're going to get bit anyway. For example right now on one of my very busy systems with 24GB of installed RAM and many terabytes of storage across three ZFS pools I'm seeing 17GB wired, of which 13.5 is ARC cache. That's the adaptive figure it currently is running at, with a maximum of 22.3 and a minimum of 2.79 (8:1 ratio.) The remainder is wired down for other reasons (there's a fairly large Postgres server running on that box, among other things, and it has a big shared buffer declaration -- that's most of the difference.) Cache hit efficiency is currently 97.8%. Active is 2.26G right now, and inactive is 2.09G. Both are stable. Overnight inactive will drop to about 1.1GB while active will not change all that much, since most of it is postgres and the middleware that talks to it along with apache, which leaves most of its processes present even when they go idle. Peak load times are about right now (mid-day), and again when the system is running backups nightly. Cache is 7448, in other words, insignificant. Free memory is 2.6G. The tunable is set to 10%, which is almost exactly what free memory is. I find that when the system gets under 1G free, transient image activation can drive it into paging and performance starts to suffer for my particular workload. > > Also, there are a few technical problems with the patch: > - you don't need to use the sysctl interface in the kernel; the values you need are > available directly, just take a look at e.g. the implementation of vm_paging_needed() That's easily fixed. I will look at it. > - similarly, querying the vfs.zfs.arc_freepage_percent_target value via > kernel_sysctlbyname is just bogus; you can use percent_target directly I did not know if during setup of the OID the value was copied (and thus you had to reference it later on) or the entry simply took the pointer and stashed that. Easily corrected. > - you don't need to sum various page counters to get a total count, there is > v_page_count > Fair enough as well. > Lastly, can you try to test reverting your patch and instead setting > vm.lowmem_period=0 ? > Yes. By default it's 10; I have not tampered with that default. Let me do a bit of work and I'll post back with a revised patch. Perhaps a tunable for percentage free + a free reserve that is a "floor"? The problem with that is where to put the defaults. One option would be to grab total size at init time and compute something similar to what "lotsfree" is for Solaris, allowing that to be tuned with the percentage if desired. I selected 25% because that's what the original test was expressing and it should be reasonable for modest RAM configurations. It's clearly too high for moderately large (or huge) memory machines unless they have a lot of RAM-hungry processes running on them.
The percentage test, however, is an easy knob to twist that is unlikely to severely harm you if you dial it too far in either direction; anyone setting it to zero obviously knows what they're getting into, and if you crank it too high all you end up doing is limiting the ARC to the minimum value. -- -- Karl karl@denninger.net
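As a rough sketch of the free-page-percentage test being debated (hypothetical names; the actual patch attached to PR kern/187594 differs in detail):

        /*
         * Sketch: ask the ARC to shrink when free physical memory drops
         * below a tunable percentage of total RAM.
         * 'freepage_percent_target' is a stand-in name; 'cnt' is the
         * kernel's global vmmeter (<sys/vmmeter.h>) in this era's
         * kernel, whose v_page_count and v_free_count fields are the
         * counters Andriy points to above.
         */
        static int freepage_percent_target = 25;

        static int
        arc_free_below_target(void)
        {
                u_int target;

                target = cnt.v_page_count * freepage_percent_target / 100;
                return (cnt.v_free_count < target);
        }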
From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 17:45:59 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 913B1852 for ; Tue, 18 Mar 2014 17:45:59 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 9D9B13B7 for ; Tue, 18 Mar 2014 17:45:58 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA29874; Tue, 18 Mar 2014 19:45:49 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WPy5F-000BGb-92; Tue, 18 Mar 2014 19:45:49 +0200 Message-ID: <53288629.60309@FreeBSD.org> Date: Tue, 18 Mar 2014 19:45:13 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> <53288024.2060005@denninger.net> In-Reply-To: <53288024.2060005@denninger.net> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 17:45:59 -0000 on 18/03/2014 19:19 Karl Denninger said the following: > > On 3/18/2014 10:20 AM, Andriy Gapon wrote: >> The following reply was made to PR kern/187594; it has been noted by GNATS. >> >> From: Andriy Gapon >> To: bug-followup@FreeBSD.org, karl@fs.denninger.net >> Cc: >> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix >> Date: Tue, 18 Mar 2014 17:15:05 +0200 >> >> Karl Denninger wrote: >> > ZFS can be convinced to engage in pathological behavior due to a bad >> > low-memory test in arc.c >> > >> > The offending file is at >> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c; it allegedly >> > checks for 25% free memory, and if it is less asks for the cache to shrink. >> > >> > (snippet from arc.c around line 2494 of arc.c in 10-STABLE; path >> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs) >> > >> > #else /* !sun */ >> > if (kmem_used() > (kmem_size() * 3) / 4) >> > return (1); >> > #endif /* sun */ >> > >> > Unfortunately these two functions do not return what the authors thought >> > they did.
It's clear what they're trying to do from the Solaris-specific >> > code up above this test. >> No, these functions do return what the authors think they do. >> The check is for KVA usage (kernel virtual address space), not for physical >> memory. > I understand, but that's nonsensical in the context of the Solaris code. > "lotsfree" is *not* a declaration of free kvm space, it's a declaration of when > the system has "lots" of free *physical* memory. No, it's not nonsensical. The replacement for the lotsfree stuff is vm_paging_needed(); the kmem_* stuff is the replacement for the vmem_* stuff in the Solaris code. > Further it makes no sense at all to allow the ARC cache to force things into > virtual (e.g. swap-space backed) memory. It seems you don't have a proper understanding of what kernel virtual memory is. That makes conversation harder. > But that's the behavior that has been > observed, and it fits with the code as originally written. > >> > The result is that the cache only shrinks when vm_paging_needed() tests >> > true, but by that time the system is in serious memory trouble and by >> No, it is not. >> The description and numbers here are a little bit outdated but they should give >> an idea of how paging works in general: >> https://wiki.freebsd.org/AvgPageoutAlgorithm >> > triggering only there it actually drives the system further into paging, >> How does ARC eviction drive the system further into paging? > 1. The system gets low on physical memory but the ARC cache is looking at available > kvm (of which there is plenty.) The ARC cache continues to expand. > > 2. vm_paging_needed() returns true and the system begins to page off to the > swap. At the same time the ARC cache is pared down because arc_reclaim_needed > has returned "1". Except that ARC is supposed to be evicted before the page daemon does anything. > 3. As the ARC cache shrinks and paging occurs, vm_paging_needed() returns false. > Paging out ceases but inactive pages remain on the swap. They are not recalled > until and unless they are scheduled to execute. Arc_reclaim_needed again > returns "0". > > 4. The hold-down timer expires in the ARC cache code ("arc_grow_retry", declared > as 60 seconds) and the ARC cache begins to expand again. > > Go back to #2 until the system's performance starts to deteriorate badly enough > due to the paging that you notice it, which occurs when something that is > actually consuming CPU time has to be called in from swap. > > This is consistent with what I and others have observed on both 9.2 and 10.0; > the ARC will expand until it hits the maximum configured even at the expense of > forcing pages onto the swap. In this specific machine's case left to defaults > it will grab nearly all physical memory (over 20GB of 24) and wire it down. Well, this does not match my experience from before 10.x. > Limiting arc_max to 16GB sorta fixes it. I say "sorta" because it turns out > that 16GB is still too much for the workload; it prevents the pathological > behavior where system "stalls" happen but only in the extreme. It turns out > with the patch in, my ARC cache stabilizes at about 13.5GB during the busiest > part of the day, growing to about 16 off-hours. > > One of the problems with just limiting it in /boot/loader.conf is that you have > to guess and the system doesn't reasonably adapt to changing memory loads. The > code is clearly intended to do that but it doesn't end up working that way in > practice. >> > because the pager will not recall pages from the swap until they are next >> > executed.
This leads the ARC to try to fill in all the available RAM even >> > though pages have been pushed off onto swap. Not good. >> Unused physical memory is a waste. It is true that ARC tries to use as >> much memory as it is allowed. The same applies to the page cache (Active, >> Inactive). >> Memory management is a dynamic system and there are a few competing agents. >> > That's true. However, what the stock code does is force the working set out of > memory and into the swap. The ideal situation is one in which there is no free > memory because cache has sized itself to consume everything *not* necessary for > the working set of the processes that are running. Unfortunately we cannot > determine this presciently because a new process may come along and we do not > necessarily know for how long a process that is blocked on an event will remain > blocked (e.g. something waiting on network I/O, etc.) > > However, it is my contention that you do not want to evict a process that is > scheduled to run (or is going to be) in favor of disk cache because you're > defeating yourself by doing so. The point of the disk cache is to avoid going > to the physical disk for I/O, but if you page something out you have ditched a > physical I/O for data in favor of having to go to physical disk *twice* -- first > to write the paged-out data to swap, and then to retrieve it when it is to be > executed. This also appears to be consistent with what is present for Solaris > machines. > > From the Sun code: > > #ifdef sun > /* > * take 'desfree' extra pages, so we reclaim sooner, rather than later > */ > extra = desfree; > > /* > * check that we're out of range of the pageout scanner. It starts to > * schedule paging if freemem is less than lotsfree and needfree. > * lotsfree is the high-water mark for pageout, and needfree is the > * number of needed free pages. We add extra pages here to make sure > * the scanner doesn't start up while we're freeing memory. > */ > if (freemem < lotsfree + needfree + extra) > return (1); > > /* > * check to make sure that swapfs has enough space so that anon > * reservations can still succeed. anon_resvmem() checks that the > * availrmem is greater than swapfs_minfree, and the number of reserved > * swap pages. We also add a bit of extra here just to prevent > * circumstances from getting really dire. > */ > if (availrmem < swapfs_minfree + swapfs_reserve + extra) > return (1); > > "freemem" is not virtual memory, it's actual memory. "Lotsfree" is the point > where the system considers free RAM to be "ample"; "needfree" is the > "desperation" point and "extra" is the margin (presumably for image activation.) > > The base code on FreeBSD doesn't look at physical memory at all; it looks at kvm > space instead. This is an incorrect statement as I explained above. vm_paging_needed() looks at physical memory. >> It is hard to correctly tune that system using a large hammer such as your >> patch. I believe that with your patch ARC will get shrunk to its minimum size >> in due time. Active + Inactive will grow to use the memory that you are >> denying >> to ARC, driving Free below a threshold, which will reduce ARC. Repeated enough >> times this will drive ARC to its minimum. > I disagree both in design theory and based on the empirical evidence of actual > operation.
> > First, I don't (ever) want to give memory to the ARC cache that otherwise would > go to "active", because any time I do that I'm going to force two page events, > which is double the amount of I/O I would take on a cache *miss*, and even with > the ARC at minimum I get a reasonable hit percentage. If I therefore prefer ARC > over "active" pages I am going to take *at least* a 200% penalty on physical I/O, > and if I get an 80% hit ratio with the ARC at a minimum the penalty is closer to > 800%! > > For inactive pages it's a bit more complicated as those may not be reactivated. > However, I am trusting FreeBSD's VM subsystem to demote those that are unlikely > to be reactivated to the cache bucket and then to "free", where they are able to > be re-used. This is consistent with what I actually see on a running system -- > the "inact" bucket is typically fairly large (often on a busy machine close to > that of "active") but pages demoted to "cache" don't stay there long - they > either get re-promoted back up or they are freed and go on the free list. > > The only time I see "inact" get out of control is when there's a kernel memory > leak somewhere (such as what I ran into the other day with the in-kernel NAT > subsystem on 10-STABLE.) But that's a bug and if it happens you're going to get > bit anyway. > > For example right now on one of my very busy systems with 24GB of installed RAM > and many terabytes of storage across three ZFS pools I'm seeing 17GB wired, of > which 13.5 is ARC cache. That's the adaptive figure it currently is running at, > with a maximum of 22.3 and a minimum of 2.79 (8:1 ratio.) The remainder is > wired down for other reasons (there's a fairly large Postgres server running on > that box, among other things, and it has a big shared buffer declaration -- > that's most of the difference.) Cache hit efficiency is currently 97.8%. > > Active is 2.26G right now, and inactive is 2.09G. Both are stable. Overnight > inactive will drop to about 1.1GB while active will not change all that much, > since most of it is postgres and the middleware that talks to it along with apache, > which leaves most of its processes present even when they go idle. Peak load > times are about right now (mid-day), and again when the system is running > backups nightly. > > Cache is 7448, in other words, insignificant. Free memory is 2.6G. > > The tunable is set to 10%, which is almost exactly what free memory is. I find > that when the system gets under 1G free, transient image activation can drive it > into paging and performance starts to suffer for my particular workload. > >> Also, there are a few technical problems with the patch: >> - you don't need to use the sysctl interface in the kernel; the values you need are >> available directly, just take a look at e.g. the implementation of >> vm_paging_needed() > That's easily fixed. I will look at it. >> - similarly, querying the vfs.zfs.arc_freepage_percent_target value via >> kernel_sysctlbyname is just bogus; you can use percent_target directly > I did not know if during setup of the OID the value was copied (and thus you had > to reference it later on) or the entry simply took the pointer and stashed > that. Easily corrected. >> - you don't need to sum various page counters to get a total count, there is >> v_page_count >> > Fair enough as well. >> Lastly, can you try to test reverting your patch and instead setting >> vm.lowmem_period=0 ? >> > Yes. By default it's 10; I have not tampered with that default.
> > Let me do a bit of work and I'll post back with a revised patch. Perhaps a > tunable for percentage free + a free reserve that is a "floor"? The problem > with that is where to put the defaults. One option would be to grab total size > at init time and compute something similar to what "lotsfree" is for Solaris, > allowing that to be tuned with the percentage if desired. I selected 25% > because that's what the original test was expressing and it should be reasonable > for modest RAM configurations. It's clearly too high for moderately large (or > huge) memory machines unless they have a lot of RAM-hungry processes running on > them. > > The percentage test, however, is an easy knob to twist that is unlikely to > severely harm you if you dial it too far in either direction; anyone setting it > to zero obviously knows what they're getting into, and if you crank it too high > all you end up doing is limiting the ARC to the minimum value. > -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 23:38:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5F9ACE77 for ; Tue, 18 Mar 2014 23:38:46 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 23631EDB for ; Tue, 18 Mar 2014 23:38:45 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqUEAC/YKFODaFve/2dsb2JhbABag0FXgwa3Y4ZrUYE/dIIlAQEBAwEBAQEgKyALBRYOCgICDRkCKQEJJgYIBwQBHASHUAgNrg6iOReBKYxoAQEbNAeCb4FJBJVvhAmQfoNJITGBBDk X-IronPort-AV: E=Sophos;i="4.97,681,1389762000"; d="scan'208";a="106763114" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 18 Mar 2014 19:38:38 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 0C11DB3F1A; Tue, 18 Mar 2014 19:38:39 -0400 (EDT) Date: Tue, 18 Mar 2014 19:38:39 -0400 (EDT) From: Rick Macklem To: Ryan Stone Message-ID: <710925933.24683288.1395185919038.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: Is the NFS Replay Cache needed for correctness with TCP mounts? MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 23:38:46 -0000 Ryan Stone wrote: > My understanding of the NFS replay cache is that it's used so that > the > NFS server can avoid trying non-idempotent requests twice if it > handles a retransmitted request (because the response to the first > request was lost in transit, for example). Is this really only > needed for UDP mounts? I would expect TCP mounts to not have the > problem because the TCP layer should handle the retransmits and the > NFS code should never see the same request twice. Is this correct? > Well, even for TCP, a client can retry a non-idempotent RPC if it does not receive an RPC reply. Normally the timeout is much longer, so it will take a network partitioning for some time to cause it.
(The timeout will vary with client, but I would expect it to be at least 1 minute. I think the new FreeBSD client uses 5 minutes.) Most (although not all NFSv3) clients will do the retry of the RPC on a new TCP connection. As such, the question really becomes "How reliable is your network interconnect?" and "How critical is file corruption on the server?". However, as I mention below, I don't believe that the old/default FreeBSD8 server uses the DRC for TCP. > > I ask because I have an NFS server (using the default legacy NFS > implementation) running FreeBSD 8.2 that is having problems with > entries in the replay cache becoming badly corrupted, leading to mbuf > leaks and system crashes. I know that the NFS code has been > rewritten > as of FreeBSD 9 so hopefully the issue is fixed in future versions, > but for the short term I'm not able to upgrade. I control the > clients > and I know that they all use TCP mounts, so I was wondering if > patching the server to disable the replay cache would be a plausible > short-term workaround for the issue until I can upgrade, or if I'm > courting disaster. > As far as I know, the old/default NFS server in FreeBSD8 does not use the DRC for TCP. (I added TCP support to the new server to try and improve correctness.) I have no idea why the replay cache would be doing anything if all the mounts are using TCP, given the old NFS server. (I'm not sure if "nfsstat -s" will list all RPCs as "Misses" or not list them at all, so I don't know if a non-zero "Misses" count indicates that the DRC is being used?) rick > Thanks, > Ryan > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Mar 18 23:57:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 944A433C; Tue, 18 Mar 2014 23:57:39 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 47BE9107; Tue, 18 Mar 2014 23:57:38 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEAAzdKFODaFve/2dsb2JhbABahBiDBrwPgxCBP3SCJQEBAQMBIwRSBRYOCgICDRkCWQaIBAiuIaI5F4EpjGEBIzQHgm+BSQSqdoNJIYEsAR8i X-IronPort-AV: E=Sophos;i="4.97,681,1389762000"; d="scan'208";a="106977219" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 18 Mar 2014 19:57:37 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id C7C4CB3F1A; Tue, 18 Mar 2014 19:57:37 -0400 (EDT) Date: Tue, 18 Mar 2014 19:57:37 -0400 (EDT) From: Rick Macklem To: Alexander Motin Message-ID: <2092082855.24699674.1395187057807.JavaMail.root@uoguelph.ca> In-Reply-To: <5328065D.60201@FreeBSD.org> Subject: Re: review/test: NFS patch to use pagesize mbuf clusters MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive:
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Mar 2014 23:57:39 -0000 Alexander Motin wrote: > Hi. > > On 18.03.2014 03:26, Rick Macklem wrote: > > Several of the TSO capable network interfaces have a limit of > > 32 mbufs in the transmit mbuf chain (the drivers call these > > transmit > > segments, which I admit I find confusing). > > > > For a 64K read/readdir reply or 64K write request, NFS passes > > a list of 34 mbufs down to TCP. TCP will split the list, since > > it is slightly more than 64K bytes, but that split will normally > > be a copy by reference of the last mbuf cluster. As such, normally > > the network interface will get a list of 34 mbufs. > > > > For TSO enabled interfaces that are limited to 32 mbufs in the > > list, the usual workaround in the driver is to copy { real copy, > > not copy by reference } the list to 32 mbuf clusters via > > m_defrag(). > > (A few drivers use m_collapse() which is less likely to succeed.) > > > > As a workaround to this problem, the attached patch modifies NFS > > to use larger pagesize clusters, so that the 64K RPC message is > > in 18 mbufs (assuming a 4K pagesize). > > > > Testing on my slow hardware which does not have TSO capability > > shows it to be performance neutral, but I believe avoiding the > > overhead of copying via m_defrag() { and possible failures > > resulting in the message never being transmitted } makes this > > patch worth doing. > > > > As such, I'd like to request review and/or testing of this patch > > by anyone who can do so. > > First, I've tried to find a respective NIC to test: cxgb/cxgbe have > a limit > of 36, and so are probably unaffected, ixgb -- 100, igb -- 64; only on em > have I found a limit of 32. > When I did a find/grep on sys/dev, I found a bunch of them, but I didn't save the output. The case that came up was virtio and the author fixed that driver, since there was no hardware limitation. The "ix" driver (in sys/dev/ixgbe) is an example for some chips. I believe the 82599 chips have the 32 limit. > I ran several profiles on the em NIC with and without the patch. I can > confirm that without the patch m_defrag() is indeed called, while > with > the patch it is not any more. But the profiler shows me that a very small > amount of time (a percent or even a fraction of one) is spent there. I can't > measure the effect (my Core-i7 desktop test system has only about 5% > CPU > load while serving full 1Gbps NFS over the em), though I can't say > for > sure that the effect can't be there on some low-end system. > Well, since m_defrag() creates a new list and bcopy()s the data, there is some overhead, although I'm not surprised it isn't that easy to measure. (I thought your server built entirely of SSDs might show a difference.) I am more concerned with the possibility of m_defrag() failing and the driver dropping the reply, forcing the client to do a fresh TCP connection and retry of the RPC after a long timeout (1 minute or more). This will show up as "terrible performance" for users. Also, some drivers use m_collapse() instead of m_defrag() and these will probably be "train wrecks". I get cases where reports of serious NFS problems get "fixed" by disabling TSO and I was hoping this would work around that. > I am also not very sure about replacing M_WAITOK with M_NOWAIT. > Instead > of waiting a bit while the VM finds a cluster, NFSMCLGET() will return a > single > mbuf; as a result, the chain of 2K or 4K clusters is replaced with a > chain of 256-byte mbufs. > I hoped the comment in the patch would explain this.
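For readers following along, the allocation strategy being questioned looks roughly like this (a sketch only; 'nfsm_getpagecl' is a made-up name, and the real patch works through the NFSMCLGET() macro mentioned above):

        /*
         * Try a pagesize (4K) cluster without sleeping; if the
         * allocator cannot supply one, fall back to a plain 256-byte
         * mbuf rather than blocking, which is the trade-off Alexander
         * is questioning.
         */
        struct mbuf *
        nfsm_getpagecl(void)
        {
                struct mbuf *m;

                m = m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE);
                if (m == NULL)
                        m = m_get(M_WAITOK, MT_DATA);
                return (m);
        }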
When I was testing (on a small i386 system), I succeeded in getting threads stuck sleeping on "btalloc" a couple of times when I used M_WAITOK for m_getjcl(). As far as I could see, this indicated that it had run out of kernel address space, but I'm not sure. --> That is why I used M_NOWAIT for m_getjcl(). As for using MCLGET(..M_NOWAIT), the main reason for doing that was I noticed that the code does a drain on zone_mcluster if this allocation attempt for a cluster fails. For some reason, m_getcl() and m_getjcl() do not do this drain of the zone? I thought the drain might help memory-constrained cases. To be honest, I've never been able to get a MCLGET(..M_NOWAIT) to fail during testing. rick > -- > Alexander Motin > From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 00:07:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5295F4F1; Wed, 19 Mar 2014 00:07:02 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id D9D10217; Wed, 19 Mar 2014 00:07:01 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqQEAEzfKFODaFve/2dsb2JhbABag0FXgwa3Y4ZrUYE/dIIlAQEBAwEBAQEgKyALGxgCAg0ZAikBCSYOBwQBHASHUAgNrgaiOheBKYxXCgEFAgEbNAeCb4FJBJVvhAmQfoNJITF7AR8i X-IronPort-AV: E=Sophos;i="4.97,681,1389762000"; d="scan'208";a="106769007" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 18 Mar 2014 20:06:52 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 7BD69B403B; Tue, 18 Mar 2014 20:06:52 -0400 (EDT) Date: Tue, 18 Mar 2014 20:06:52 -0400 (EDT) From: Rick Macklem To: araujo@FreeBSD.org Message-ID: <459657309.24706896.1395187612496.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: review/test: NFS patch to use pagesize mbuf clusters MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Mar 2014 00:07:02 -0000 Marcelo Araujo wrote: > > Hello Rick, > > > I have a couple of machines with 10G interfaces capable of TSO. > What kind of result are you expecting? Is it a speed-up in reads? > Well, if NFS is working well on these systems, I would hope you don't see any regression. If your TSO enabled interfaces can handle more than 32 transmit segments, you should see very little effect (there is usually a #define constant in the driver with something like TX_SEGMAX in it; if this is >= 34 the limit does not apply). Even if your network interface is one of the ones limited to 32 transmit segments, the driver usually fixes the list via a call to m_defrag(). Although this involves a bunch of bcopy()'ng, you still might not see any easily measured performance improvement, assuming m_defrag() is getting the job done. (Network latency and disk latency in the server will predominate, I suspect. A server built entirely using SSDs might be a different story?)
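As a rough illustration of the driver-side m_defrag() workaround Rick describes (hypothetical names and segment limit; real drivers differ in detail):

        /*
         * A TSO-capable driver that can only post 32 descriptors per
         * packet defragments longer mbuf chains before handing them
         * to DMA.
         */
        #define DRV_TSO_MAXSEGS 32      /* hypothetical hardware limit */

        static int
        drv_fixup_tso_chain(struct mbuf **mp)
        {
                struct mbuf *m = *mp, *n;
                int nsegs = 0;

                for (n = m; n != NULL; n = n->m_next)
                        nsegs++;
                if (nsegs <= DRV_TSO_MAXSEGS)
                        return (0);
                /* m_defrag() copies the whole chain into fresh clusters. */
                n = m_defrag(m, M_NOWAIT);
                if (n == NULL)
                        return (ENOBUFS);       /* the packet gets dropped */
                *mp = n;
                return (0);
        }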
Thanks for doing the testing, since a lack of regressions is what I
care about most. (I am hoping this resolves cases where users have had
to disable TSO to make NFS work ok for them.)

rick

> I'm going to run some tests today, but against 9.1-RELEASE, which my
> servers are running.
>
> Best Regards,
>
> 2014-03-18 9:26 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca > :
>
> Hi,
>
> Several of the TSO capable network interfaces have a limit of
> 32 mbufs in the transmit mbuf chain (the drivers call these transmit
> segments, which I admit I find confusing).
>
> For a 64K read/readdir reply or 64K write request, NFS passes
> a list of 34 mbufs down to TCP. TCP will split the list, since
> it is slightly more than 64K bytes, but that split will normally
> be a copy by reference of the last mbuf cluster. As such, normally
> the network interface will get a list of 34 mbufs.
>
> For TSO enabled interfaces that are limited to 32 mbufs in the
> list, the usual workaround in the driver is to copy { real copy,
> not copy by reference } the list to 32 mbuf clusters via m_defrag().
> (A few drivers use m_collapse(), which is less likely to succeed.)
>
> As a workaround to this problem, the attached patch modifies NFS
> to use larger pagesize clusters, so that the 64K RPC message is
> in 18 mbufs (assuming a 4K pagesize).
>
> Testing on my slow hardware, which does not have TSO capability,
> shows it to be performance neutral, but I believe avoiding the
> overhead of copying via m_defrag() { and possible failures
> resulting in the message never being transmitted } makes this
> patch worth doing.
>
> As such, I'd like to request review and/or testing of this patch
> by anyone who can do so.
>
> Thanks in advance for your help, rick
> ps: If you don't get the attachment, just email and I'll
> send you a copy.
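The 34- and 18-mbuf figures quoted above follow directly from the
cluster sizes; a quick check in plain userland arithmetic, assuming two
small header mbufs ahead of the 64K of data:

	/* Verify the chain lengths quoted above: 64K of RPC data in 2K
	 * clusters vs. 4K page-size clusters, plus two header mbufs.
	 * Illustrative only. */
	#include <stdio.h>

	int
	main(void)
	{
		const int data = 64 * 1024;
		const int hdr_mbufs = 2;	/* assumed header/record-mark mbufs */

		printf("2K clusters: %d + %d = %d mbufs\n",
		    data / 2048, hdr_mbufs, data / 2048 + hdr_mbufs);	/* 34 */
		printf("4K clusters: %d + %d = %d mbufs\n",
		    data / 4096, hdr_mbufs, data / 4096 + hdr_mbufs);	/* 18 */
		return (0);
	}

With 4K clusters the whole 64K message fits comfortably under a
32-segment TSO limit, which is the entire point of the patch.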
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to " freebsd-fs-unsubscribe@freebsd.org "
>
> --
> Marcelo Araujo
> araujo@FreeBSD.org

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 07:31:26 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 94369923 for ; Wed, 19 Mar 2014 07:31:26 +0000 (UTC) Received: from mail-we0-x231.google.com (mail-we0-x231.google.com [IPv6:2a00:1450:400c:c03::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2C6CCF5C for ; Wed, 19 Mar 2014 07:31:26 +0000 (UTC) Received: by mail-we0-f177.google.com with SMTP id u57so6612842wes.22 for ; Wed, 19 Mar 2014 00:31:24 -0700 (PDT) X-Received: by 10.194.188.41 with SMTP id fx9mr723184wjc.56.1395214284569; Wed, 19 Mar 2014 00:31:24 -0700 (PDT) Received: from mavbook.mavhome.dp.ua ([134.249.139.101]) by mx.google.com with ESMTPSA id lz3sm41722354wic.1.2014.03.19.00.31.22 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 19 Mar 2014 00:31:23 -0700 (PDT) Sender: Alexander Motin Message-ID: <532947C9.9010607@FreeBSD.org> Date: Wed, 19 Mar 2014 09:31:21 +0200 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0 MIME-Version: 1.0 To: Rick Macklem Subject: Re: review/test: NFS patch to use pagesize mbuf clusters References: <2092082855.24699674.1395187057807.JavaMail.root@uoguelph.ca> In-Reply-To: <2092082855.24699674.1395187057807.JavaMail.root@uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Mar 2014 07:31:26 -0000

On 19.03.2014 01:57, Rick Macklem wrote:
> Alexander Motin wrote:
>> I ran several profiles on the em NIC with and without the patch. I
>> can confirm that without the patch m_defrag() is indeed called,
>> while with the patch it is not any more. But the profiler shows me
>> that only a very small amount of time (percents, or even fractions
>> of a percent) is spent there. I can't measure the effect (my Core-i7
>> desktop test system has only about 5% CPU load while serving full
>> 1Gbps NFS over the em), though I can't say for sure that the effect
>> isn't there on some low-end system.
>>
> Well, since m_defrag() creates a new list and bcopy()s the data, there
> is some overhead, although I'm not surprised it isn't that easy to
> measure.
> (I thought your server built entirely of SSDs might show a
> difference.)

I did my test even from TMPFS, not SSD, but the mentioned em NIC is
only 1Gbps, which is too slow to reasonably load the system.

> I am more concerned with the possibility of m_defrag() failing and the
> driver dropping the reply, forcing the client to do a fresh TCP
> connection and retry of the RPC after a long timeout (1 minute or
> more). This will show up as "terrible performance" for users.
>
> Also, some drivers use m_collapse() instead of m_defrag(), and these
> will probably be "train wrecks". I get cases where reports of serious
> NFS problems get "fixed" by disabling TSO, and I was hoping this would
> work around that.

Yes, I accept that argument. I don't see much reason to cut continuous
data into small chunks.

>> I am also not very sure about replacing M_WAITOK with M_NOWAIT.
>> Instead of waiting a bit while the VM finds a cluster, NFSMCLGET()
>> will return a single mbuf; as a result, you get a chain of 256-byte
>> mbufs instead of a chain of 2K or 4K clusters.
>>
> I hoped the comment in the patch would explain this.
>
> When I was testing (on a small i386 system), I succeeded in getting
> threads stuck sleeping on "btalloc" a couple of times when I used
> M_WAITOK for m_getjcl(). As far as I could see, this indicated that
> it had run out of kernel address space, but I'm not sure.
> --> That is why I used M_NOWAIT for m_getjcl().
>
> As for using MCLGET(..M_NOWAIT), the main reason for doing that
> was I noticed that the code does a drain on zone_mcluster if this
> allocation attempt for a cluster fails. For some reason, m_getcl()
> and m_getjcl() do not do this drain of the zone?
> I thought the drain might help memory constrained cases.
> To be honest, I've never been able to get a MCLGET(..M_NOWAIT)
> to fail during testing.

If it is true, I think that should be handled inside the allocation
code, not worked around here. Passing M_NOWAIT means that you agree to
get NULL there, but IMO you don't really want to cut 64K of data into
~200-byte pieces in any case, even if the system is in a low-memory
condition, since at least most NICs won't be able to send it without
defragging, which will also be problematic in the low-memory case.
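One way to read that objection in code: prefer a page-size cluster, but
degrade to a 2K cluster rather than to bare ~256-byte mbufs. This is a
hypothetical helper sketching the idea, not what either version of the
patch actually does:

	/*
	 * Sketch of the fallback being argued about above.  The helper
	 * name is invented for illustration.
	 */
	#include <sys/param.h>
	#include <sys/mbuf.h>

	static struct mbuf *
	nfsm_getcluster(void)
	{
		struct mbuf *m;

		/* Try a 4K page-size cluster first, without sleeping. */
		m = m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE);
		if (m == NULL)
			m = m_getcl(M_NOWAIT, MT_DATA, 0);	/* 2K cluster */
		return (m);	/* may still be NULL; caller must cope */
	}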
-- 
Alexander Motin

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 12:51:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D1C94E5A for ; Wed, 19 Mar 2014 12:51:08 +0000 (UTC) Received: from r2-d2.netlabs.org (r2-d2.netlabs.org [213.238.45.90]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 42987122 for ; Wed, 19 Mar 2014 12:51:07 +0000 (UTC) Received: (qmail 41396 invoked by uid 89); 19 Mar 2014 12:51:05 -0000 Received: from unknown (HELO eternal.bfh.ch) (ml-ktk@netlabs.org@147.87.42.166) by 0 with ESMTPA; 19 Mar 2014 12:51:05 -0000 Message-ID: <532992B8.4090407@netlabs.org> Date: Wed, 19 Mar 2014 13:51:04 +0100 From: Adrian Gschwend User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:17.0) Gecko/20130801 Thunderbird/17.0.8 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> <53288024.2060005@denninger.net> <53288629.60309@FreeBSD.org> In-Reply-To: <53288629.60309@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Mar 2014 12:51:08 -0000

On 18.03.14 18:45, Andriy Gapon wrote:
>> This is consistent with what I and others have observed on both 9.2
>> and 10.0; the ARC will expand until it hits the maximum configured
>> even at the expense of forcing pages onto the swap. In this
>> specific machine's case left to defaults it will grab nearly all
>> physical memory (over 20GB of 24) and wire it down.
> Well, this does not match my experience from before 10.x times.

I reported the issue on which Karl gave feedback and developed the
patch. The original thread of my report started here:

http://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019043.html

Note that I don't have big memory eaters like VMs; it's just a bunch of
jails and services running in them, including some JVMs.

Check out the munin graphs before and after:

Daily, which does not seem to grow much anymore:
http://ktk.netlabs.org/misc/munin-mem-zfs1.png

Weekly:
http://ktk.netlabs.org/misc/munin-mem-zfs2.png

You can actually see where I activated the patch (16.3); the system
has behaved *much* better since then. I did one more reboot, which is
why it goes down again, but I have not rebooted since.

In the moments where munin did not report anything, the system was
locked up in the ARC/swap interaction and virtually dead. From working
on the system it feels like a new machine; everything is super fast
and snappy.

I don't understand much of the discussions you guys are having, but
I'm pretty sure Karl fixed an issue which gave me headaches on BSD for
years. I first saw this in 8.x when I started to use ZFS productively,
and I've seen it in all 9.x releases as well, up to this patch.
regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 13:07:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B7D9CCB8 for ; Wed, 19 Mar 2014 13:07:02 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 81DE02EF for ; Wed, 19 Mar 2014 13:07:01 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2JD6t7g090679 for ; Wed, 19 Mar 2014 08:06:55 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Wed Mar 19 08:06:55 2014 Message-ID: <5329966A.60308@denninger.net> Date: Wed, 19 Mar 2014 08:06:50 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> <53288024.2060005@denninger.net> <53288629.60309@FreeBSD.org> <532992B8.4090407@netlabs.org> In-Reply-To: <532992B8.4090407@netlabs.org> X-Antivirus: avast! (VPS 140319-0, 03/19/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Mar 2014 13:07:02 -0000

On 3/19/2014 7:51 AM, Adrian Gschwend wrote:
> On 18.03.14 18:45, Andriy Gapon wrote:
>
>>> This is consistent with what I and others have observed on both 9.2
>>> and 10.0; the ARC will expand until it hits the maximum configured
>>> even at the expense of forcing pages onto the swap. In this
>>> specific machine's case left to defaults it will grab nearly all
>>> physical memory (over 20GB of 24) and wire it down.
>> Well, this does not match my experience from before 10.x times.
> I reported the issue on which Karl gave feedback and developed the
> patch. The original thread of my report started here:
>
> http://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019043.html
>
> Note that I don't have big memory eaters like VMs; it's just a bunch
> of jails and services running in them, including some JVMs.
>
> Check out the munin graphs before and after:
>
> Daily, which does not seem to grow much anymore:
> http://ktk.netlabs.org/misc/munin-mem-zfs1.png
>
> Weekly:
> http://ktk.netlabs.org/misc/munin-mem-zfs2.png
>
> You can actually see where I activated the patch (16.3); the system
> has behaved *much* better since then. I did one more reboot, which is
> why it goes down again, but I have not rebooted since.
>
> In the moments where munin did not report anything, the system was
> locked up in the ARC/swap interaction and virtually dead.
> From working on the system it feels like a new machine; everything is
> super fast and snappy.
>
> I don't understand much of the discussions you guys are having, but
> I'm pretty sure Karl fixed an issue which gave me headaches on BSD for
> years. I first saw this in 8.x when I started to use ZFS productively,
> and I've seen it in all 9.x releases as well, up to this patch.
>
> regards
>
> Adrian
>
I have a newer version of this patch responding to the criticisms given
on gnats; it is being tested now.

The salient difference is that it now does two things that are a bit
different:

1. It grabs the VM "first level" warning (vm.stats.vm.v_free_target),
deducts 20% from that, and sets that as the low-RAM warning level.

2. It also allows the setting of a freemem reservation in percentage as
an "additional" reservation (plus the low RAM warning level.)

Both are exposed via sysctl and thus can be tuned during runtime.

The reason for the change is that there is a legitimate criticism that
the pager may allow inact pages to grow without boundary if you never
get into the VM system's first warning level on free pages; that is, it
is never called upon to perform page stealing. "Never" seems like a bad
decision (shouldn't you clean things up eventually anyway?), but it is
what it is, and the VM system has proved over time to be stable and
fast, and for mixed workloads I can see where there could be trouble
there in that the ARC cache could be convinced to evict unnecessarily.
Unbounded inact page growth doesn't happen on my systems here, but
since it might, and it appears to be reasonably easy to defend against
without causing other bad side effects, that appears to be worth
eliminating as a potential problem.

So instead I try to get more intelligent about choosing the arc
eviction level; I want it in the zone where the system will steal pages
back, but I *do not*, under any circumstance, want to allow
vm.v_free_min to be invaded, because that's where processes asking for
memory get **SUSPENDED** (that is, where stalls start to happen.)

Since the knobs are exposed you can get the behavior you have now if
you want it, or you can leave it alone and let the code choose what it
thinks are intelligent values. If you diddle the knobs and don't like
them, you can reset the percentage reservation to zero along with
freepages, and the system will pick up the defaults again for you in
real time and without rebooting.

Also, and very importantly, I can now trivially provoke an INTENTIONAL
stall with the knobs exposed; set the reservation down far enough
(which effectively reverts to the system only paring cache when
paging_needed is set, as is the case with the default arc.c
"as-shipped") and then simply copy a huge file to /dev/null (big enough
to fill up the cache) and bang -- INSTANT 15 second stall. Turn it back
up so the ARC cache is not allowed to drive the system into hard paging
and the problem disappears.

I'm going to let it run through the day today before sending it up; it
ran overnight without problems and looks good, but I want to go through
a heavy load period before publishing it.

I note that there are list complaints about this behavior going back to
at least 2010.....
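Stripped of the sysctl plumbing, the test described above reduces to a
few lines. A sketch only, with invented variable names; the real patch
appears later in this thread:

	/*
	 * Illustrative restatement of the low-memory test described
	 * above.  Returns 1 when the ARC should be pared.
	 */
	static int
	arc_low_memory(unsigned int free_pages, unsigned int total_pages,
	    unsigned int free_target, unsigned int percent_target)
	{
		/* Default floor: vm.stats.vm.v_free_target less 20%. */
		unsigned int floor_pages = free_target - (free_target / 5);

		/* Optional extra reservation, as a percentage of all RAM. */
		unsigned int extra = (total_pages / 100) * percent_target;

		return (free_pages < floor_pages + extra);
	}

Because the floor sits just inside the pageout scanner's own target,
the VM system gets a chance to steal pages before the ARC is pared,
which is the ordering the description above argues for.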
-- 
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 14:18:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E936CF2C for ; Wed, 19 Mar 2014 14:18:47 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 8A824CAE for ; Wed, 19 Mar 2014 14:18:47 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2JEIjCv013106 for ; Wed, 19 Mar 2014 09:18:45 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Wed Mar 19 09:18:45 2014 Message-ID: <5329A740.4060304@denninger.net> Date: Wed, 19 Mar 2014 09:18:40 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: avg@FreeBSD.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix References: <201403181520.s2IFK1M3069036@freefall.freebsd.org> <53288024.2060005@denninger.net> In-Reply-To: <53288024.2060005@denninger.net> X-Antivirus: avast! (VPS 140319-0, 03/19/2014), Outbound message X-Antivirus-Status: Clean Cc: freebsd-fs@freebsd.org, bug-followup@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Mar 2014 14:18:48 -0000

On 3/18/2014 12:19 PM, Karl Denninger wrote:
>
> On 3/18/2014 10:20 AM, Andriy Gapon wrote:
>> The following reply was made to PR kern/187594; it has been noted by
>> GNATS.
>>
>> From: Andriy Gapon
>> To: bug-followup@FreeBSD.org, karl@fs.denninger.net
>> Cc:
>> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
>> Date: Tue, 18 Mar 2014 17:15:05 +0200
>>
>> Karl Denninger wrote:
>> > ZFS can be convinced to engage in pathological behavior due to a bad
>> > low-memory test in arc.c
>> >
>> > The offending file is at
>> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c; it
>> > allegedly checks for 25% free memory, and if it is less asks for
>> > the cache to shrink.
>> >
>> > (snippet from arc.c around line 2494 of arc.c in 10-STABLE; path
>> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs)
>> >
>> > #else /* !sun */
>> > if (kmem_used() > (kmem_size() * 3) / 4)
>> > return (1);
>> > #endif /* sun */
>> >
>> > Unfortunately these two functions do not return what the authors
>> > thought they did. It's clear what they're trying to do from the
>> > Solaris-specific code up above this test.
>> No, these functions do return what the authors think they do.
>> The check is for KVA usage (kernel virtual address space), not for
>> physical memory.
> I understand, but that's nonsensical in the context of the Solaris
> code. "lotsfree" is *not* a declaration of free kvm space, it's a
> declaration of when the system has "lots" of free *physical* memory.
>
> Further, it makes no sense at all to allow the ARC cache to force
> things into virtual (e.g. swap-space backed) memory. But that's the
> behavior that has been observed, and it fits with the code as
> originally written.
>
>> > The result is that the cache only shrinks when vm_paging_needed()
>> > tests true, but by that time the system is in serious memory
>> > trouble and by
>> No, it is not.
>> The description and numbers here are a little bit outdated but they
>> should give an idea of how paging works in general:
>> https://wiki.freebsd.org/AvgPageoutAlgorithm
>> > triggering only there it actually drives the system further
>> > into paging,
>> How does ARC eviction drive the system further into paging?
> 1. The system gets low on physical memory, but the ARC cache is
> looking at available kvm (of which there is plenty.) The ARC cache
> continues to expand.
>
> 2. vm_paging_needed() returns true and the system begins to page off
> to the swap. At the same time the ARC cache is pared down because
> arc_reclaim_needed has returned "1".
>
> 3. As the ARC cache shrinks and paging occurs, vm_paging_needed()
> returns false. Paging out ceases, but inactive pages remain on the
> swap. They are not recalled until and unless they are scheduled to
> execute. Arc_reclaim_needed again returns "0".
>
> 4. The hold-down timer expires in the ARC cache code
> ("arc_grow_retry", declared as 60 seconds) and the ARC cache begins
> to expand again.
>
> Go back to #2 until the system's performance starts to deteriorate
> badly enough due to the paging that you notice it, which occurs when
> something that is actually consuming CPU time has to be called in
> from swap.
>
> This is consistent with what I and others have observed on both 9.2
> and 10.0; the ARC will expand until it hits the maximum configured,
> even at the expense of forcing pages onto the swap. In this specific
> machine's case, left to defaults it will grab nearly all physical
> memory (over 20GB of 24) and wire it down.
>
> Limiting arc_max to 16GB sorta fixes it. I say "sorta" because it
> turns out that 16GB is still too much for the workload; it prevents
> the pathological behavior where system "stalls" happen, but only in
> the extreme. It turns out that with the patch in, my ARC cache
> stabilizes at about 13.5GB during the busiest part of the day,
> growing to about 16 off-hours.
>
> One of the problems with just limiting it in /boot/loader.conf is
> that you have to guess, and the system doesn't reasonably adapt to
> changing memory loads. The code is clearly intended to do that, but
> it doesn't end up working that way in practice.
>> > because the pager will not recall pages from the swap until they
>> > are next executed. This leads the ARC to try to fill in all the
>> > available RAM even though pages have been pushed off onto swap.
>> > Not good.
>> Unused physical memory is a waste. It is true that ARC tries to use
>> as much of memory as it is allowed. The same applies to the page
>> cache (Active, Inactive). Memory management is a dynamic system and
>> there are a few competing agents.
> That's true. However, what the stock code does is force working set
> out of memory and into the swap. The ideal situation is one in which
> there is no free memory, because cache has sized itself to consume
> everything *not* necessary for the working set of the processes that
> are running. Unfortunately we cannot determine this presciently,
> because a new process may come along and we do not necessarily know
> for how long a process that is blocked on an event will remain
> blocked (e.g. something waiting on network I/O, etc.)
>
> However, it is my contention that you do not want to evict a process
> that is scheduled to run (or is going to be) in favor of disk cache,
> because you're defeating yourself by doing so. The point of the disk
> cache is to avoid going to the physical disk for I/O, but if you page
> something you have ditched a physical I/O for data in favor of having
> to go to physical disk *twice* -- first to write the paged-out data
> to swap, and then to retrieve it when it is to be executed. This also
> appears to be consistent with what is present for Solaris machines.
>
> From the Sun code:
>
> #ifdef sun
> /*
> * take 'desfree' extra pages, so we reclaim sooner, rather than later
> */
> extra = desfree;
>
> /*
> * check that we're out of range of the pageout scanner. It starts to
> * schedule paging if freemem is less than lotsfree and needfree.
> * lotsfree is the high-water mark for pageout, and needfree is the
> * number of needed free pages. We add extra pages here to make sure
> * the scanner doesn't start up while we're freeing memory.
> */
> if (freemem < lotsfree + needfree + extra)
> return (1);
>
> /*
> * check to make sure that swapfs has enough space so that anon
> * reservations can still succeed. anon_resvmem() checks that the
> * availrmem is greater than swapfs_minfree, and the number of
> * reserved swap pages. We also add a bit of extra here just to
> * prevent circumstances from getting really dire.
> */
> if (availrmem < swapfs_minfree + swapfs_reserve + extra)
> return (1);
>
> "freemem" is not virtual memory, it's actual memory. "Lotsfree" is
> the point where the system considers free RAM to be "ample";
> "needfree" is the "desperation" point and "extra" is the margin
> (presumably for image activation.)
>
> The base code on FreeBSD doesn't look at physical memory at all; it
> looks at kvm space instead.
>
>> It is hard to correctly tune that system using a large hammer such
>> as your patch. I believe that with your patch ARC will get shrunk
>> to its minimum size in due time. Active + Inactive will grow to use
>> the memory that you are denying to ARC, driving Free below a
>> threshold, which will reduce ARC. Repeated enough times, this will
>> drive ARC to its minimum.
> I disagree, both in design theory and based on the empirical evidence
> of actual operation.
>
> First, I don't (ever) want to give memory to the ARC cache that
> otherwise would go to "active", because any time I do that I'm going
> to force two page events, which is double the amount of I/O I would
> take on a cache *miss*, and even with the ARC at minimum I get a
> reasonable hit percentage. If I therefore prefer ARC over "active"
> pages, I am going to take *at least* a 200% penalty on physical I/O,
> and if I get an 80% hit ratio with the ARC at a minimum, the penalty
> is closer to 800%!
>
> For inactive pages it's a bit more complicated, as those may not be
> reactivated. However, I am trusting FreeBSD's VM subsystem to demote
> those that are unlikely to be reactivated to the cache bucket and
> then to "free", where they are able to be re-used. This is consistent
> with what I actually see on a running system -- the "inact" bucket is
> typically fairly large (often on a busy machine close to that of
> "active"), but pages demoted to "cache" don't stay there long - they
> either get re-promoted back up or they are freed and go on the free
> list.
>
> The only time I see "inact" get out of control is when there's a
> kernel memory leak somewhere (such as what I ran into the other day
> with the in-kernel NAT subsystem on 10-STABLE.) But that's a bug, and
> if it happens you're going to get bit anyway.
>
> For example, right now on one of my very busy systems with 24GB of
> installed RAM and many terabytes of storage across three ZFS pools,
> I'm seeing 17GB wired, of which 13.5 is ARC cache. That's the
> adaptive figure it currently is running at, with a maximum of 22.3
> and a minimum of 2.79 (an 8:1 ratio.) The remainder is wired down for
> other reasons (there's a fairly large Postgres server running on that
> box, among other things, and it has a big shared buffer declaration
> -- that's most of the difference.) Cache hit efficiency is currently
> 97.8%.
>
> Active is 2.26G right now, and inactive is 2.09G. Both are stable.
> Overnight inactive will drop to about 1.1GB while active will not
> change all that much, since most of it is postgres and the middleware
> that talks to it along with apache, which leaves most of its
> processes present even when they go idle. Peak load times are about
> right now (mid-day), and again when the system is running backups
> nightly.
>
> Cache is 7448, in other words, insignificant. Free memory is 2.6G.
>
> The tunable is set to 10%, which is almost exactly what free memory
> is. I find that when the system gets under 1G free, transient image
> activation can drive it into paging, and performance starts to suffer
> for my particular workload.
>
>> Also, there are a few technical problems with the patch:
>> - you don't need to use the sysctl interface in the kernel, the
>> values you need are available directly; just take a look at e.g.
>> the implementation of vm_paging_needed()
> That's easily fixed. I will look at it.
>> - similarly, querying the vfs.zfs.arc_freepage_percent_target value
>> via kernel_sysctlbyname is just bogus; you can use percent_target
>> directly
> I did not know if, during setup of the OID, the value was copied (and
> thus you had to reference it later on) or the entry simply took the
> pointer and stashed that. Easily corrected.
>> - you don't need to sum various page counters to get a total count,
>> there is v_page_count
> Fair enough as well.
>> Lastly, can you try to test reverting your patch and instead
>> setting vm.lowmem_period=0 ?
> Yes. By default it's 10; I have not tampered with that default.
>
> Let me do a bit of work and I'll post back with a revised patch.
> Perhaps a tunable for percentage free + a free reserve that is a
> "floor"? The problem with that is where to put the defaults. One
> option would be to grab total size at init time and compute something
> similar to what "lotsfree" is for Solaris, allowing that to be tuned
> with the percentage if desired. I selected 25% because that's what
> the original test was expressing, and it should be reasonable for
> modest RAM configurations. It's clearly too high for moderately large
> (or huge) memory machines unless they have a lot of RAM-hungry
> processes running on them.
>
> The percentage test, however, is an easy knob to twist that is
> unlikely to severely harm you if you dial it too far in either
> direction; anyone setting it to zero obviously knows what they're
> getting into, and if you crank it too high all you end up doing is
> limiting the ARC to the minimum value.
>
Responsive to the criticisms, and in an attempt to better track what
the VM system does, I offer this update to the patch. The following
changes have been made:

1. There are now two tunables:

vfs.zfs.arc_freepages -- the number of free pages below which we
declare low memory and ask for ARC paring.

vfs.zfs.arc_freepage_percent -- the additional free RAM to reserve in
percent of total, if any (added to freepages)

2. vfs.zfs.arc_freepages, if zero (as is the default at boot),
defaults to "vm.stats.vm.v_free_target" less 20%. This allows the
system to get into the page-stealing paradigm before the ARC cache is
invaded. While I do not run into a situation of unbridled inact page
growth here, the criticism that the original patch could allow this
appears to be well-founded. Setting the low memory alert here should
prevent this, as the system will now allow the ARC to grow to the
point that page-stealing takes place.

3. The previous option to reserve either a hard amount of RAM or a
percentage of RAM remains.

4. The defaults should auto-tune for any particular RAM configuration
to reasonable values that prevent stalls, yet if you have
circumstances that argue for reserving more memory, you may do so.

Updated patch follows:

*** arc.c.original	Thu Mar 13 09:18:48 2014
--- arc.c	Wed Mar 19 07:44:01 2014
***************
*** 18,23 ****
--- 18,99 ----
   *
   * CDDL HEADER END
   */
+
+ /* Karl Denninger (karl@denninger.net), 3/18/2014, FreeBSD-specific
+  *
+  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
+  * the ARC cache to be pared down.  The reason for the change is that the
+  * apparent attempted algorithm is to start evicting ARC cache when free
+  * pages fall below 25% of installed RAM.  This maps reasonably well to how
+  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
+  * to pare down.
+  *
+  * The problem is that on FreeBSD machines the system doesn't appear to be
+  * getting what the authors of the original code thought they were looking at
+  * with its test -- or at least not what Solaris did -- and as a result that
+  * test never triggers.  That leaves the only reclaim trigger as the "paging
+  * needed" status flag, and by the time that trips the system is already
+  * in low-memory trouble.  This can lead to severe pathological behavior
+  * under the following scenario:
+  * - The system starts to page and ARC is evicted.
+  * - The system stops paging as ARC's eviction drops wired RAM a bit.
+  * - ARC starts increasing its allocation again, and wired memory grows.
+  * - A new image is activated, and the system once again attempts to page.
+  * - ARC starts to be evicted again.
+  * - Back to #2
+  *
+  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
+  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
+  * else needs it.  That would be ok if we evicted cache when required.
+  *
+  * Unfortunately the system can get into a state where it never
+  * manages to page anything of materiality back in, as if there is active
+  * I/O the ARC will start grabbing space once again as soon as the memory
+  * contention state drops.  For this reason the "paging is occurring" flag
+  * should be the **last resort** condition for ARC eviction; you want to
+  * (as Solaris does) start when there is material free RAM left BUT the
+  * vm system thinks it needs to be active to steal pages back in the attempt
+  * to never get into the condition where you're potentially paging off
+  * executables in favor of leaving disk cache allocated.
+  *
+  * To fix this we change how we look at low memory, declaring two new
+  * runtime tunables.
+  *
+  * The new sysctls are:
+  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
+  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
+  *
+  * vfs.zfs.arc_freepages is initialized from vm.stats.vm.v_free_target
+  * less 20%, if we find that it is zero.  Note that vm.stats.vm.v_free_target
+  * is not initialized at boot -- the system has to be running first, so we
+  * cannot initialize this in arc_init.  So we check during runtime; this
+  * also allows the user to return to defaults by setting it to zero.
+  *
+  * This should ensure that we allow the VM system to steal pages first,
+  * but pare the cache before we suspend processes attempting to get more
+  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
+  * or force a specific percentage reservation as well, but doing so may
+  * cause the cache to pare back while the VM system remains willing to
+  * allow "inactive" pages to accumulate.  The challenge is that image
+  * activation can force things into the page space on a repeated basis
+  * if you allow this level to be too small (the above pathological
+  * behavior); the defaults should avoid that behavior, but the sysctls
+  * are exposed should your workload require adjustment.
+  *
+  * If we're using this check for low memory we are replacing the previous
+  * ones, including the oddball "random" reclaim that appears to fire far
+  * more often than it should.  We still trigger if the system pages.
+  *
+  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
+  * status messages when the reclaim status trips on and off, along with the
+  * page count aggregate that triggered it (and the free space) for each
+  * event.
+  */
+
+ #define NEWRECLAIM
+ #undef NEWRECLAIM_DEBUG
+
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
   * Copyright (c) 2013 by Delphix. All rights reserved.
***************
*** 139,144 ****
--- 215,226 ----
  
  #include
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ #include <sys/sysctl.h>
+ #endif
+ #endif /* NEWRECLAIM */
+
  #ifdef illumos
  #ifndef _KERNEL
  /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
***************
*** 203,218 ****
--- 285,320 ----
  int zfs_arc_shrink_shift = 0;
  int zfs_arc_p_min_shift = 0;
  int zfs_disable_dup_eviction = 0;
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ static int freepages = 0;	/* This much memory is considered critical */
+ static int percent_target = 0;	/* Additionally reserve "X" percent free RAM */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
  TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
  TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
+ TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  SYSCTL_DECL(_vfs_zfs);
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0, "Maximum ARC size");
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0, "Minimum ARC size");
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  /*
   * Note that buffers can be in one of 6 states:
   * ARC_anon - anonymous (discussed below)
***************
*** 2438,2443 ****
--- 2540,2557 ----
  {
  
  #ifdef _KERNEL
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ 	u_int vmfree = 0;
+ 	u_int vmtotal = 0;
+ 	size_t vmsize;
+ #ifdef NEWRECLAIM_DEBUG
+ 	static int xval = -1;
+ 	static int oldpercent = 0;
+ 	static int oldfreepages = 0;
+ #endif /* NEWRECLAIM_DEBUG */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  	if (needfree)
  		return (1);
***************
*** 2476,2481 ****
--- 2590,2596 ----
  		return (1);
  
  #if defined(__i386)
+
  	/*
  	 * If we're on an i386 platform, it's possible that we'll exhaust the
  	 * kernel heap space before we ever run out of available physical
***************
*** 2492,2502 ****
  		return (1);
  #endif
  #else	/* !sun */
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
- #else
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif
--- 2607,2680 ----
  		return (1);
  #endif
  #else	/* !sun */
+
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ /*
+  * Implement the new tunable free RAM algorithm.  We check the free pages
+  * against the minimum specified target and the percentage that should be
+  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
+  * on a FreeBSD system the older checks are not performed.
+  *
+  * Check first to see if we need to init freepages, then test.
+  */
+ 	if (!freepages) {	/* If zero then (re)init */
+ 		vmsize = sizeof(vmtotal);
+ 		kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_target", &vmtotal, &vmsize, NULL, 0, NULL, 0);
+ 		freepages = vmtotal - (vmtotal / 5);
+ #ifdef NEWRECLAIM_DEBUG
+ 		printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u] [%u less 20%%]\n", freepages, vmtotal);
+ #endif /* NEWRECLAIM_DEBUG */
+ 	}
+
+ 	vmsize = sizeof(vmtotal);
+ 	kernel_sysctlbyname(curthread, "vm.stats.vm.v_page_count", &vmtotal, &vmsize, NULL, 0, NULL, 0);
+ 	vmsize = sizeof(vmfree);
+ 	kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_count", &vmfree, &vmsize, NULL, 0, NULL, 0);
+ #ifdef NEWRECLAIM_DEBUG
+ 	if (percent_target != oldpercent) {
+ 		printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, vmtotal, vmfree);
+ 		oldpercent = percent_target;
+ 	}
+ 	if (freepages != oldfreepages) {
+ 		printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, vmtotal, vmfree);
+ 		oldfreepages = freepages;
+ 	}
+ #endif /* NEWRECLAIM_DEBUG */
+ 	if (!vmtotal) {
+ 		vmtotal = 1;	/* Protect against divide by zero */
+ 				/* (should be impossible, but...) */
+ 	}
+ /*
+  * Now figure out how much free RAM we require to call the ARC cache status
+  * "ok".  Add the percentage specified of the total to the base requirement.
+  */
+ 	if (vmfree < freepages + ((vmtotal / 100) * percent_target)) {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 1) {
+ 			printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vmtotal, vmfree, ((vmfree * 100) / vmtotal), freepages, percent_target);
+ 			xval = 1;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(1);
+ 	} else {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 0) {
+ 			printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vmtotal, vmfree, ((vmfree * 100) / vmtotal), freepages, percent_target);
+ 			xval = 0;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(0);
+ 	}
+
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif

-- 
-- Karl
karl@denninger.net
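The knobs the patch adds can be watched from userland via
sysctlbyname(3). A minimal sketch, assuming the patched kernel above is
running (the vm.stats names are stock FreeBSD; the two vfs.zfs names
exist only with the patch applied):

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		int freepages = 0, percent = 0;
		u_int v_free = 0, v_total = 0;
		size_t len;

		len = sizeof(freepages);
		sysctlbyname("vfs.zfs.arc_freepages", &freepages, &len, NULL, 0);
		len = sizeof(percent);
		sysctlbyname("vfs.zfs.arc_freepage_percent", &percent, &len, NULL, 0);
		len = sizeof(v_free);
		sysctlbyname("vm.stats.vm.v_free_count", &v_free, &len, NULL, 0);
		len = sizeof(v_total);
		sysctlbyname("vm.stats.vm.v_page_count", &v_total, &len, NULL, 0);

		printf("free %u of %u pages; ARC floor %d pages + %d%% reserve\n",
		    v_free, v_total, freepages, percent);
		return (0);
	}

Setting vfs.zfs.arc_freepages back to zero at runtime makes the kernel
recompute the default floor, per the comment block in the patch.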
GDli5PKIEubD2Bn+gp3vB/DkfKySh5NBHVB+OPCoXRUWBkQxme65wBO02OZZt0k8Iq0i4Rci WV6z+lQHqDKtaVGgMsHn6PoeYhjf5Al5SP+U3imTjF2aCca1iDB5JOccX04MNljvifXgcbJN nkMgrzmm1ZgJ1PLur/ADWPlnz45quOhHg1TfUCLfI/DzgG7Z6u+oy4siQuFr9QT0MQIDAQAB o4HWMIHTMAkGA1UdEwQCMAAwEQYJYIZIAYb4QgEBBAQDAgWgMAsGA1UdDwQEAwIF4DAsBglg hkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwHQYDVR0OBBYEFHw4 +LnuALyLA5Cgy7T5ZAX1WzKPMB8GA1UdIwQYMBaAFF3U3hpBZq40HB5VM7B44/gmXiI0MDgG CWCGSAGG+EIBAwQrFilodHRwczovL2N1ZGFzeXN0ZW1zLm5ldDoxMTQ0My9yZXZva2VkLmNy bDANBgkqhkiG9w0BAQUFAAOCAQEAZ0L4tQbBd0hd4wuw/YVqEBDDXJ54q2AoqQAmsOlnoxLO 31ehM/LvrTIP4yK2u1VmXtUumQ4Ao15JFM+xmwqtEGsh70RRrfVBAGd7KOZ3GB39FP2TgN/c L5fJKVxOqvEnW6cL9QtvUlcM3hXg8kDv60OB+LIcSE/P3/s+0tEpWPjxm3LHVE7JmPbZIcJ1 YMoZvHh0NSjY5D0HZlwtbDO7pDz9sZf1QEOgjH828fhtborkaHaUI46pmrMjiBnY6ujXMcWD pxtikki0zY22nrxfTs5xDWGxyrc/cmucjxClJF6+OYVUSaZhiiHfa9Pr+41okLgsRB0AmNwE f6ItY3TI8DGCBQowggUGAgEBMIGjMIGdMQswCQYDVQQGEwJVUzEQMA4GA1UECBMHRmxvcmlk YTESMBAGA1UEBxMJTmljZXZpbGxlMRkwFwYDVQQKExBDdWRhIFN5c3RlbXMgTExDMRwwGgYD VQQDExNDdWRhIFN5c3RlbXMgTExDIENBMS8wLQYJKoZIhvcNAQkBFiBjdXN0b21lci1zZXJ2 aWNlQGN1ZGFzeXN0ZW1zLm5ldAIBCDAJBgUrDgMCGgUAoIICOzAYBgkqhkiG9w0BCQMxCwYJ KoZIhvcNAQcBMBwGCSqGSIb3DQEJBTEPFw0xNDAzMTkxNDE4NDBaMCMGCSqGSIb3DQEJBDEW BBQei71KWp0Us3DWHQWNCkeF3NMHRjBsBgkqhkiG9w0BCQ8xXzBdMAsGCWCGSAFlAwQBKjAL BglghkgBZQMEAQIwCgYIKoZIhvcNAwcwDgYIKoZIhvcNAwICAgCAMA0GCCqGSIb3DQMCAgFA MAcGBSsOAwIHMA0GCCqGSIb3DQMCAgEoMIG0BgkrBgEEAYI3EAQxgaYwgaMwgZ0xCzAJBgNV BAYTAlVTMRAwDgYDVQQIEwdGbG9yaWRhMRIwEAYDVQQHEwlOaWNldmlsbGUxGTAXBgNVBAoT EEN1ZGEgU3lzdGVtcyBMTEMxHDAaBgNVBAMTE0N1ZGEgU3lzdGVtcyBMTEMgQ0ExLzAtBgkq hkiG9w0BCQEWIGN1c3RvbWVyLXNlcnZpY2VAY3VkYXN5c3RlbXMubmV0AgEIMIG2BgsqhkiG 9w0BCRACCzGBpqCBozCBnTELMAkGA1UEBhMCVVMxEDAOBgNVBAgTB0Zsb3JpZGExEjAQBgNV BAcTCU5pY2V2aWxsZTEZMBcGA1UEChMQQ3VkYSBTeXN0ZW1zIExMQzEcMBoGA1UEAxMTQ3Vk YSBTeXN0ZW1zIExMQyBDQTEvMC0GCSqGSIb3DQEJARYgY3VzdG9tZXItc2VydmljZUBjdWRh c3lzdGVtcy5uZXQCAQgwDQYJKoZIhvcNAQEBBQAEggIAYM3wX4zcQ6slDGipG999HQbbYlLY wEaJRr1wTMOUoP+KPdDpxP9hJ6lOJYbiaM98HM1mSjxEvyX6kydwbKvV9QKVld7dliA2+pTy yH7ZlVdVKgtYWH6J03fjyIIdZaFHpAfSVmHeoNxKvgVZ27ur0cLs5VG+BcOeW37Jctenhidf H2XMs5DgCQMcn2ZcUqM7ncq3zPQu5K3afxcrmFhkrvKoeUgiLnZtERGHKClhdhQHthOGjaPa WShUih/yJoDcsEeuOOio4wQ3mM7DIwvn2F4B/hL90NIM0VLW95NyeJJ2TjbMa8kQ2tSv+PC3 NPXNCJRv6wONUT3i+U+9Dl69sJVrLmfXku+vbXFb7VirsEN7WP8x7ABX6TA3WIDNTy+RMcMx EmYim5pmLId5h3s72b48vR/ptwPrAmxrQOaLPt5kKkRxZ4D4uTQb0+XPtAFJKEhGCQyEQ86n 4b7Kzskoucm2UWx78uMUPD6eSiWdvv0AtnkYULhnPAErNz2t1hnpmsJK23dDZQfyIYRDxc8Q 3UZX2KVyyD/gnq3G3JNDj5zayedh2f08bCPKBqoYUbWnhY0rtkyCdWaL3zz+CXGqnT8Kp/wF Uan14xdvVyETg6xXOLxFAYIj16nXS/gjWm45oyhEGlT0GcKCBcjK8V46KuXqwqZ1k5ojKWYV AfZPB7YAAAAAAAA= --------------ms010701070402040604030408-- From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 14:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 44C65FB2 for ; Wed, 19 Mar 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2DB4ACBF for ; Wed, 19 Mar 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2JEK0rN021788 for ; Wed, 19 Mar 2014 14:20:00 GMT (envelope-from 
gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2JEK0MA021787; Wed, 19 Mar 2014 14:20:00 GMT (envelope-from gnats) Date: Wed, 19 Mar 2014 14:20:00 GMT Message-Id: <201403191420.s2JEK0MA021787@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Karl Denninger List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Mar 2014 14:20:01 -0000 The following reply was made to PR kern/187594; it has been noted by GNATS. From: Karl Denninger To: avg@FreeBSD.org Cc: freebsd-fs@freebsd.org, bug-followup@FreeBSD.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Date: Wed, 19 Mar 2014 09:18:40 -0500 This is a cryptographically signed message in MIME format. --------------ms010701070402040604030408 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable On 3/18/2014 12:19 PM, Karl Denninger wrote: > > On 3/18/2014 10:20 AM, Andriy Gapon wrote: >> The following reply was made to PR kern/187594; it has been noted by=20 >> GNATS. >> >> From: Andriy Gapon >> To: bug-followup@FreeBSD.org, karl@fs.denninger.net >> Cc: >> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and f= ix >> Date: Tue, 18 Mar 2014 17:15:05 +0200 >> >> Karl Denninger wrote: >> > ZFS can be convinced to engage in pathological behavior due to a b= ad >> > low-memory test in arc.c >> > >> > The offending file is at >> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c; it = >> allegedly >> > checks for 25% free memory, and if it is less asks for the cache=20 >> to shrink. >> > >> > (snippet from arc.c around line 2494 of arc.c in 10-STABLE; path >> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs) >> > >> > #else /* !sun */ >> > if (kmem_used() > (kmem_size() * 3) / 4) >> > return (1); >> > #endif /* sun */ >> > >> > Unfortunately these two functions do not return what the authors=20 >> thought >> > they did. It's clear what they're trying to do from the=20 >> Solaris-specific >> > code up above this test. >> No, these functions do return what the authors think they do. >> The check is for KVA usage (kernel virtual address space), not for=20 >> physical memory. > I understand, but that's nonsensical in the context of the Solaris=20 > code. "lotsfree" is *not* a declaration of free kvm space, it's a=20 > declaration of when the system has "lots" of free *physical* memory. > > Further it makes no sense at all to allow the ARC cache to force=20 > things into virtual (e.g. swap-space backed) memory. But that's the=20 > behavior that has been observed, and it fits with the code as=20 > originally written. > >> > The result is that the cache only shrinks when=20 >> vm_paging_needed() tests >> > true, but by that time the system is in serious memory trouble=20 >> and by >> No, it is not. >> The description and numbers here are a little bit outdated but they = >> should give >> an idea of how paging works in general: >> https://wiki.freebsd.org/AvgPageoutAlgorithm >> > triggering only there it actually drives the system further=20 >> into paging, >> How does ARC eviction drives the system further into paging? > 1. System gets low on physical memory but the ARC cache is looking at=20 > available kvm (of which there is plenty.) 
The ARC cache continues to=20 > expand. > > 2. vm_paging_needed() returns true and the system begins to page off=20 > to the swap. At the same time the ARC cache is pared down because=20 > arc_reclaim_needed has returned "1". > > 3. As the ARC cache shrinks and paging occurs vm_paging_needed()=20 > returns false. Paging out ceases but inactive pages remain on the=20 > swap. They are not recalled until and unless they are scheduled to=20 > execute. Arc_reclaim_needed again returns "0". > > 4. The hold-down timer expires in the ARC cache code=20 > ("arc_grow_retry", declared as 60 seconds) and the ARC cache begins to = > expand again. > > Go back to #2 until the system's performance starts to deteriorate=20 > badly enough due to the paging that you notice it, which occurs when=20 > something that is actually consuming CPU time has to be called in from = > swap. > > This is consistent with what I and others have observed on both 9.2=20 > and 10.0; the ARC will expand until it hits the maximum configured=20 > even at the expense of forcing pages onto the swap. In this specific=20 > machine's case left to defaults it will grab nearly all physical=20 > memory (over 20GB of 24) and wire it down. > > Limiting arc_max to 16GB sorta fixes it. I say "sorta" because it=20 > turns out that 16GB is still too much for the workload; it prevents=20 > the pathological behavior where system "stalls" happen but only in the = > extreme. It turns out with the patch in my ARC cache stabilizes at=20 > about 13.5GB during the busiest part of the day, growing to about 16=20 > off-hours. > > One of the problems with just limiting it in /boot/loader.conf is that = > you have to guess and the system doesn't reasonably adapt to changing=20 > memory loads. The code is clearly intended to do that but it doesn't=20 > end up working that way in practice. >> > because the pager will not recall pages from the swap until=20 >> they are next >> > executed. This leads the ARC to try to fill in all the available=20 >> RAM even >> > though pages have been pushed off onto swap. Not good. >> Unused physical memory is a waste. It is true that ARC tries to=20 >> use as much of >> memory as it is allowed. The same applies to the page cache=20 >> (Active, Inactive). >> Memory management is a dynamic system and there are a few competing = >> agents. > That's true. However, what the stock code does is force working set=20 > out of memory and into the swap. The ideal situation is one in which=20 > there is no free memory because cache has sized itself to consume=20 > everything *not* necessary for the working set of the processes that=20 > are running. Unfortunately we cannot determine this presciently=20 > because a new process may come along and we do not necessarily know=20 > for how long a process that is blocked on an event will remain blocked = > (e.g. something waiting on network I/O, etc.) > > However, it is my contention that you do not want to evict a process=20 > that is scheduled to run (or is going to be) in favor of disk cache=20 > because you're defeating yourself by doing so. The point of the disk=20 > cache is to avoid going to the physical disk for I/O, but if you page=20 > something you have ditched a physical I/O for data in favor of having=20 > to go to physical disk *twice* -- first to write the paged-out data to = > swap, and then to retrieve it when it is to be executed. This also=20 > appears to be consistent with what is present for Solaris machines. 
> From the Sun code:
>
> #ifdef sun
>         /*
>          * take 'desfree' extra pages, so we reclaim sooner, rather than later
>          */
>         extra = desfree;
>
>         /*
>          * check that we're out of range of the pageout scanner.  It starts to
>          * schedule paging if freemem is less than lotsfree and needfree.
>          * lotsfree is the high-water mark for pageout, and needfree is the
>          * number of needed free pages.  We add extra pages here to make sure
>          * the scanner doesn't start up while we're freeing memory.
>          */
>         if (freemem < lotsfree + needfree + extra)
>                 return (1);
>
>         /*
>          * check to make sure that swapfs has enough space so that anon
>          * reservations can still succeed.  anon_resvmem() checks that the
>          * availrmem is greater than swapfs_minfree, and the number of reserved
>          * swap pages.  We also add a bit of extra here just to prevent
>          * circumstances from getting really dire.
>          */
>         if (availrmem < swapfs_minfree + swapfs_reserve + extra)
>                 return (1);
>
> "freemem" is not virtual memory, it's actual memory.  "Lotsfree" is
> the point where the system considers free RAM to be "ample";
> "needfree" is the "desperation" point and "extra" is the margin
> (presumably for image activation.)
>
> The base code on FreeBSD doesn't look at physical memory at all; it
> looks at kvm space instead.
>
>> It is hard to correctly tune that system using a large hammer such as your
>> patch.  I believe that with your patch ARC will get shrunk to its minimum size
>> in due time.  Active + Inactive will grow to use the memory that you are denying
>> to ARC, driving Free below a threshold, which will reduce ARC.  Repeated enough
>> times this will drive ARC to its minimum.
> I disagree, both in design theory and based on the empirical evidence
> of actual operation.
>
> First, I don't (ever) want to give memory to the ARC cache that
> otherwise would go to "active", because any time I do that I'm going
> to force two page events, which is double the amount of I/O I would
> take on a cache *miss*, and even with the ARC at minimum I get a
> reasonable hit percentage.  If I therefore prefer ARC over "active"
> pages I am going to take *at least* a 200% penalty on physical I/O, and
> if I get an 80% hit ratio with the ARC at a minimum the penalty is
> closer to 800%!
>
> For inactive pages it's a bit more complicated, as those may not be
> reactivated.  However, I am trusting FreeBSD's VM subsystem to demote
> those that are unlikely to be reactivated to the cache bucket and then
> to "free", where they are able to be re-used.  This is consistent with
> what I actually see on a running system -- the "inact" bucket is
> typically fairly large (often on a busy machine close to that of
> "active") but pages demoted to "cache" don't stay there long; they
> either get re-promoted back up or they are freed and go on the free list.
>
> The only time I see "inact" get out of control is when there's a
> kernel memory leak somewhere (such as what I ran into the other day
> with the in-kernel NAT subsystem on 10-STABLE.)  But that's a bug and
> if it happens you're going to get bit anyway.
>
> For example, right now on one of my very busy systems with 24GB of
> installed RAM and many terabytes of storage across three ZFS pools I'm
> seeing 17GB wired, of which 13.5 is ARC cache.  That's the adaptive
> figure it currently is running at, with a maximum of 22.3 and a
> minimum of 2.79 (8:1 ratio.)
> The remainder is wired down for other
> reasons (there's a fairly large Postgres server running on that box,
> among other things, and it has a big shared buffer declaration --
> that's most of the difference.)  Cache hit efficiency is currently 97.8%.
>
> Active is 2.26G right now, and inactive is 2.09G.  Both are stable.
> Overnight inactive will drop to about 1.1GB while active will not
> change all that much, since most of it is postgres and the middleware
> that talks to it, along with apache, which leaves most of its processes
> present even when they go idle.  Peak load times are about right now
> (mid-day), and again when the system is running backups nightly.
>
> Cache is 7448 -- in other words, insignificant.  Free memory is 2.6G.
>
> The tunable is set to 10%, which is almost exactly what free memory
> is.  I find that when the system gets under 1G free, transient image
> activation can drive it into paging and performance starts to suffer
> for my particular workload.
>
>> Also, there are a few technical problems with the patch:
>> - you don't need to use the sysctl interface in the kernel; the values you
>> need are available directly, just take a look at e.g. the implementation of
>> vm_paging_needed()
> That's easily fixed.  I will look at it.
>> - similarly, querying the vfs.zfs.arc_freepage_percent_target value via
>> kernel_sysctlbyname is just bogus; you can use percent_target directly
> I did not know whether during setup of the OID the value was copied (and
> thus you had to reference it later on) or the entry simply took the
> pointer and stashed that.  Easily corrected.
>> - you don't need to sum various page counters to get a total count; there is
>> v_page_count
> Fair enough as well.
>> Lastly, can you try to test reverting your patch and instead setting
>> vm.lowmem_period=0 ?
> Yes.  By default it's 10; I have not tampered with that default.
>
> Let me do a bit of work and I'll post back with a revised patch.
> Perhaps a tunable for percentage free plus a free reserve that is a
> "floor"?  The problem with that is where to put the defaults.  One
> option would be to grab total size at init time and compute something
> similar to what "lotsfree" is for Solaris, allowing that to be tuned
> with the percentage if desired.  I selected 25% because that's what
> the original test was expressing and it should be reasonable for
> modest RAM configurations.  It's clearly too high for moderately large
> (or huge) memory machines unless they have a lot of RAM-hungry
> processes running on them.
>
> The percentage test, however, is an easy knob to twist that is
> unlikely to severely harm you if you dial it too far in either
> direction; anyone setting it to zero obviously knows what they're
> getting into, and if you crank it too high all you end up doing is
> limiting the ARC to the minimum value.

Responsive to the criticisms, and in an attempt to better track what the
VM system does, I offer this update to the patch.  The following changes
have been made:

1. There are now two tunables:

vfs.zfs.arc_freepages -- the number of free pages below which we declare
low memory and ask for ARC paring.

vfs.zfs.arc_freepage_percent -- the additional free RAM to reserve in
percent of total, if any (added to freepages).

2. vfs.zfs.arc_freepages, if zero (as is the default at boot), defaults
to "vm.stats.vm.v_free_target" less 20%.
This allows the system to get
into the page-stealing paradigm before the ARC cache is invaded.  While
I do not run into a situation of unbridled inact page growth here, the
criticism that the original patch could allow this appears to be
well-founded.  Setting the low memory alert here should prevent this, as
the system will now allow the ARC to grow to the point that
page-stealing takes place.

3. The previous option to reserve either a hard amount of RAM or a
percentage of RAM remains.

4. The defaults should auto-tune for any particular RAM configuration to
reasonable values that prevent stalls, yet if you have circumstances
that argue for reserving more memory you may do so.

Updated patch follows:

*** arc.c.original	Thu Mar 13 09:18:48 2014
--- arc.c	Wed Mar 19 07:44:01 2014
***************
*** 18,23 ****
--- 18,99 ----
   *
   * CDDL HEADER END
   */
+ 
+ /* Karl Denninger (karl@denninger.net), 3/18/2014, FreeBSD-specific
+  *
+  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
+  * the ARC cache to be pared down.  The reason for the change is that the
+  * apparent attempted algorithm is to start evicting ARC cache when free
+  * pages fall below 25% of installed RAM.  This maps reasonably well to how
+  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
+  * to pare down.
+  *
+  * The problem is that on FreeBSD machines the system doesn't appear to be
+  * getting what the authors of the original code thought they were looking at
+  * with its test -- or at least not what Solaris did -- and as a result that
+  * test never triggers.  That leaves the only reclaim trigger as the "paging
+  * needed" status flag, and by the time that trips the system is already
+  * in low-memory trouble.  This can lead to severe pathological behavior
+  * under the following scenario:
+  * - The system starts to page and ARC is evicted.
+  * - The system stops paging as ARC's eviction drops wired RAM a bit.
+  * - ARC starts increasing its allocation again, and wired memory grows.
+  * - A new image is activated, and the system once again attempts to page.
+  * - ARC starts to be evicted again.
+  * - Back to #2
+  *
+  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
+  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
+  * else needs it.  That would be ok if we evicted cache when required.
+  *
+  * Unfortunately the system can get into a state where it never
+  * manages to page anything of materiality back in, as if there is active
+  * I/O the ARC will start grabbing space once again as soon as the memory
+  * contention state drops.  For this reason the "paging is occurring" flag
+  * should be the **last resort** condition for ARC eviction; you want to
+  * (as Solaris does) start when there is material free RAM left BUT the
+  * vm system thinks it needs to be active to steal pages back in the attempt
+  * to never get into the condition where you're potentially paging off
+  * executables in favor of leaving disk cache allocated.
+  *
+  * To fix this we change how we look at low memory, declaring two new
+  * runtime tunables.
+  *
+  * The new sysctls are:
+  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
+  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
+  *
+  * vfs.zfs.arc_freepages is initialized from vm.stats.vm.v_free_target,
+  * less 20% if we find that it is zero.
+  * Note that vm.stats.vm.v_free_target
+  * is not initialized at boot -- the system has to be running first, so we
+  * cannot initialize this in arc_init.  So we check during runtime; this
+  * also allows the user to return to defaults by setting it to zero.
+  *
+  * This should insure that we allow the VM system to steal pages first,
+  * but pare the cache before we suspend processes attempting to get more
+  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
+  * or force a specific percentage reservation as well, but doing so may
+  * cause the cache to pare back while the VM system remains willing to
+  * allow "inactive" pages to accumulate.  The challenge is that image
+  * activation can force things into the page space on a repeated basis
+  * if you allow this level to be too small (the above pathological
+  * behavior); the defaults should avoid that behavior but the sysctls
+  * are exposed should your workload require adjustment.
+  *
+  * If we're using this check for low memory we are replacing the previous
+  * ones, including the oddball "random" reclaim that appears to fire far
+  * more often than it should.  We still trigger if the system pages.
+  *
+  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
+  * status messages when the reclaim status trips on and off, along with the
+  * page count aggregate that triggered it (and the free space) for each
+  * event.
+  */
+ 
+ #define NEWRECLAIM
+ #undef NEWRECLAIM_DEBUG
+ 
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
   * Copyright (c) 2013 by Delphix. All rights reserved.
***************
*** 139,144 ****
--- 215,226 ----
  
  #include
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ #include <sys/sysctl.h>
+ #endif
+ #endif /* NEWRECLAIM */
+ 
  #ifdef illumos
  #ifndef _KERNEL
  /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
***************
*** 203,218 ****
--- 285,320 ----
  int zfs_arc_shrink_shift = 0;
  int zfs_arc_p_min_shift = 0;
  int zfs_disable_dup_eviction = 0;
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ static int freepages = 0;	/* This much memory is considered critical */
+ static int percent_target = 0;	/* Additionally reserve "X" percent free RAM */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
  TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
  TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
+ TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  SYSCTL_DECL(_vfs_zfs);
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0, "Maximum ARC size");
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0, "Minimum ARC size");
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  /*
   * Note that buffers can be in one of 6 states:
   *	ARC_anon - anonymous (discussed below)
***************
*** 2438,2443 ****
--- 2540,2557 ----
  {
  
  #ifdef _KERNEL
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ 	u_int vmfree = 0;
+ 	u_int vmtotal = 0;
+ 	size_t vmsize;
+ #ifdef NEWRECLAIM_DEBUG
+ 	static int xval = -1;
+ 	static int oldpercent = 0;
+ 	static int oldfreepages = 0;
+ #endif /* NEWRECLAIM_DEBUG */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  	if (needfree)
  		return (1);
***************
*** 2476,2481 ****
--- 2590,2596 ----
  		return (1);
  
  #if defined(__i386)
+ 
  	/*
  	 * If we're on an i386 platform, it's possible that we'll exhaust the
  	 * kernel heap space before we ever run out of available physical
***************
*** 2492,2502 ****
  		return (1);
  #endif
  #else	/* !sun */
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
- #else
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif
--- 2607,2680 ----
  		return (1);
  #endif
  #else	/* !sun */
+ 
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ /*
+  * Implement the new tunable free RAM algorithm.  We check the free pages
+  * against the minimum specified target and the percentage that should be
+  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
+  * on a FreeBSD system the older checks are not performed.
+  *
+  * Check first to see if we need to init freepages, then test.
+  */
+ 	if (!freepages) {	/* If zero then (re)init */
+ 		vmsize = sizeof(vmtotal);
+ 		kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_target", &vmtotal, &vmsize, NULL, 0, NULL, 0);
+ 		freepages = vmtotal - (vmtotal / 5);
+ #ifdef NEWRECLAIM_DEBUG
+ 		printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u] [%u less 20%%]\n", freepages, vmtotal);
+ #endif /* NEWRECLAIM_DEBUG */
+ 	}
+ 
+ 	vmsize = sizeof(vmtotal);
+ 	kernel_sysctlbyname(curthread, "vm.stats.vm.v_page_count", &vmtotal, &vmsize, NULL, 0, NULL, 0);
+ 	vmsize = sizeof(vmfree);
+ 	kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_count", &vmfree, &vmsize, NULL, 0, NULL, 0);
+ #ifdef NEWRECLAIM_DEBUG
+ 	if (percent_target != oldpercent) {
+ 		printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, vmtotal, vmfree);
+ 		oldpercent = percent_target;
+ 	}
+ 	if (freepages != oldfreepages) {
+ 		printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, vmtotal, vmfree);
+ 		oldfreepages = freepages;
+ 	}
+ #endif /* NEWRECLAIM_DEBUG */
+ 	if (!vmtotal) {
+ 		vmtotal = 1;	/* Protect against divide by zero */
+ 				/* (should be impossible, but...) */
+ 	}
+ /*
+  * Now figure out how much free RAM we require to call the ARC cache status
+  * "ok".  Add the percentage specified of the total to the base requirement.
+  */
+ 
+ 	if (vmfree < freepages + ((vmtotal / 100) * percent_target)) {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 1) {
+ 			printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vmtotal, vmfree, ((vmfree * 100) / vmtotal), freepages, percent_target);
+ 			xval = 1;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(1);
+ 	} else {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 0) {
+ 			printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vmtotal, vmfree, ((vmfree * 100) / vmtotal), freepages, percent_target);
+ 			xval = 0;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(0);
+ 	}
+ 
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif

-- 
-- Karl
karl@denninger.net
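The two knobs this patch adds can also be exercised from userland.  The
following is a minimal sketch, not part of the patch: it assumes a kernel
built with NEWRECLAIM (the OIDs do not exist otherwise) and uses only the
standard sysctlbyname(3) interface, with minimal error handling.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int main(void)
    {
        int freepages;
        size_t len = sizeof(freepages);

        /* Read the current threshold; zero means "auto-derive from
         * vm.stats.vm.v_free_target on the next reclaim check". */
        if (sysctlbyname("vfs.zfs.arc_freepages", &freepages, &len,
            NULL, 0) == -1) {
            perror("vfs.zfs.arc_freepages");
            return (1);
        }
        printf("vfs.zfs.arc_freepages = %d\n", freepages);

        /* Writing zero re-arms the auto-default logic, per the patch
         * comment ("allows the user to return to defaults"). */
        freepages = 0;
        if (sysctlbyname("vfs.zfs.arc_freepages", NULL, NULL,
            &freepages, sizeof(freepages)) == -1)
            perror("set vfs.zfs.arc_freepages");
        return (0);
    }

The same effect can be had from a shell with sysctl(8), e.g.
"sysctl vfs.zfs.arc_freepages=0".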
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 19 18:09:29 2014
Delivered-To: freebsd-fs@freebsd.org
Message-ID: <5329DD53.2020308@denninger.net>
Date: Wed, 19 Mar 2014 13:09:23 -0500
From: Karl Denninger
To: freebsd-fs@freebsd.org
Subject: Fwd: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
References: <5329DBF2.6060008@denninger.net>
In-Reply-To: <5329DBF2.6060008@denninger.net>
List-Id: Filesystems

CC'ing the list on my PR followup; forgot to include it when submitted.
-------- Original Message --------
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Wed, 19 Mar 2014 13:03:30 -0500
From: Karl Denninger
To: bug-followup@FreeBSD.org, karl@fs.denninger.net

The 20% invasion of the first-level paging regime looks too aggressive
under very heavy load.  I have changed my system here to 10% (during
runtime) and obtain a materially better response profile.

At 20% the system will still occasionally page recently-used executable
code to disk before cache is released, which is undesirable.  10% looks
better but may STILL be too aggressive (in other words, 5% might be
"just right").

Being able to tune this in real time is a BIG help!

Adjusted patch follows (only a couple of lines have changed):

*** arc.c.original	Thu Mar 13 09:18:48 2014
--- arc.c	Wed Mar 19 13:01:48 2014
***************
*** 18,23 ****
--- 18,99 ----
   *
   * CDDL HEADER END
   */
+ 
+ /* Karl Denninger (karl@denninger.net), 3/18/2014, FreeBSD-specific
+  *
+  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
+  * the ARC cache to be pared down.  The reason for the change is that the
+  * apparent attempted algorithm is to start evicting ARC cache when free
+  * pages fall below 25% of installed RAM.  This maps reasonably well to how
+  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
+  * to pare down.
+  *
+  * The problem is that on FreeBSD machines the system doesn't appear to be
+  * getting what the authors of the original code thought they were looking at
+  * with its test -- or at least not what Solaris did -- and as a result that
+  * test never triggers.  That leaves the only reclaim trigger as the "paging
+  * needed" status flag, and by the time that trips the system is already
+  * in low-memory trouble.  This can lead to severe pathological behavior
+  * under the following scenario:
+  * - The system starts to page and ARC is evicted.
+  * - The system stops paging as ARC's eviction drops wired RAM a bit.
+  * - ARC starts increasing its allocation again, and wired memory grows.
+  * - A new image is activated, and the system once again attempts to page.
+  * - ARC starts to be evicted again.
+  * - Back to #2
+  *
+  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
+  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
+  * else needs it.  That would be ok if we evicted cache when required.
+  *
+  * Unfortunately the system can get into a state where it never
+  * manages to page anything of materiality back in, as if there is active
+  * I/O the ARC will start grabbing space once again as soon as the memory
+  * contention state drops.  For this reason the "paging is occurring" flag
+  * should be the **last resort** condition for ARC eviction; you want to
+  * (as Solaris does) start when there is material free RAM left BUT the
+  * vm system thinks it needs to be active to steal pages back in the attempt
+  * to never get into the condition where you're potentially paging off
+  * executables in favor of leaving disk cache allocated.
+  *
+  * To fix this we change how we look at low memory, declaring two new
+  * runtime tunables.
+  *
+  * The new sysctls are:
+  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
+  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
+  *
+  * vfs.zfs.arc_freepages is initialized from vm.stats.vm.v_free_target,
+  * less 10% if we find that it is zero.
+  * Note that vm.stats.vm.v_free_target
+  * is not initialized at boot -- the system has to be running first, so we
+  * cannot initialize this in arc_init.  So we check during runtime; this
+  * also allows the user to return to defaults by setting it to zero.
+  *
+  * This should insure that we allow the VM system to steal pages first,
+  * but pare the cache before we suspend processes attempting to get more
+  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
+  * or force a specific percentage reservation as well, but doing so may
+  * cause the cache to pare back while the VM system remains willing to
+  * allow "inactive" pages to accumulate.  The challenge is that image
+  * activation can force things into the page space on a repeated basis
+  * if you allow this level to be too small (the above pathological
+  * behavior); the defaults should avoid that behavior but the sysctls
+  * are exposed should your workload require adjustment.
+  *
+  * If we're using this check for low memory we are replacing the previous
+  * ones, including the oddball "random" reclaim that appears to fire far
+  * more often than it should.  We still trigger if the system pages.
+  *
+  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
+  * status messages when the reclaim status trips on and off, along with the
+  * page count aggregate that triggered it (and the free space) for each
+  * event.
+  */
+ 
+ #define NEWRECLAIM
+ #undef NEWRECLAIM_DEBUG
+ 
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
   * Copyright (c) 2013 by Delphix. All rights reserved.
***************
*** 139,144 ****
--- 215,226 ----
  
  #include
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ #include <sys/sysctl.h>
+ #endif
+ #endif /* NEWRECLAIM */
+ 
  #ifdef illumos
  #ifndef _KERNEL
  /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
***************
*** 203,218 ****
--- 285,320 ----
  int zfs_arc_shrink_shift = 0;
  int zfs_arc_p_min_shift = 0;
  int zfs_disable_dup_eviction = 0;
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ static int freepages = 0;	/* This much memory is considered critical */
+ static int percent_target = 0;	/* Additionally reserve "X" percent free RAM */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
  TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
  TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
+ TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  SYSCTL_DECL(_vfs_zfs);
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0, "Maximum ARC size");
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0, "Minimum ARC size");
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  /*
   * Note that buffers can be in one of 6 states:
   *	ARC_anon - anonymous (discussed below)
***************
*** 2438,2443 ****
--- 2540,2557 ----
  {
  
  #ifdef _KERNEL
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ 	u_int vmfree = 0;
+ 	u_int vmtotal = 0;
+ 	size_t vmsize;
+ #ifdef NEWRECLAIM_DEBUG
+ 	static int xval = -1;
+ 	static int oldpercent = 0;
+ 	static int oldfreepages = 0;
+ #endif /* NEWRECLAIM_DEBUG */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  	if (needfree)
  		return (1);
***************
*** 2476,2481 ****
--- 2590,2596 ----
  		return (1);
  
  #if defined(__i386)
+ 
  	/*
  	 * If we're on an i386 platform, it's possible that we'll exhaust the
  	 * kernel heap space before we ever run out of available physical
***************
*** 2492,2502 ****
  		return (1);
  #endif
  #else	/* !sun */
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
- #else
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif
--- 2607,2680 ----
  		return (1);
  #endif
  #else	/* !sun */
+ 
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ /*
+  * Implement the new tunable free RAM algorithm.  We check the free pages
+  * against the minimum specified target and the percentage that should be
+  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
+  * on a FreeBSD system the older checks are not performed.
+  *
+  * Check first to see if we need to init freepages, then test.
+  */
+ 	if (!freepages) {	/* If zero then (re)init */
+ 		vmsize = sizeof(vmtotal);
+ 		kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_target", &vmtotal, &vmsize, NULL, 0, NULL, 0);
+ 		freepages = vmtotal - (vmtotal / 10);
+ #ifdef NEWRECLAIM_DEBUG
+ 		printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u] [%u less 10%%]\n", freepages, vmtotal);
+ #endif /* NEWRECLAIM_DEBUG */
+ 	}
+ 
+ 	vmsize = sizeof(vmtotal);
+ 	kernel_sysctlbyname(curthread, "vm.stats.vm.v_page_count", &vmtotal, &vmsize, NULL, 0, NULL, 0);
+ 	vmsize = sizeof(vmfree);
+ 	kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_count", &vmfree, &vmsize, NULL, 0, NULL, 0);
+ #ifdef NEWRECLAIM_DEBUG
+ 	if (percent_target != oldpercent) {
+ 		printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, vmtotal, vmfree);
+ 		oldpercent = percent_target;
+ 	}
+ 	if (freepages != oldfreepages) {
+ 		printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, vmtotal, vmfree);
+ 		oldfreepages = freepages;
+ 	}
+ #endif /* NEWRECLAIM_DEBUG */
+ 	if (!vmtotal) {
+ 		vmtotal = 1;	/* Protect against divide by zero */
+ 				/* (should be impossible, but...) */
+ 	}
+ /*
+  * Now figure out how much free RAM we require to call the ARC cache status
+  * "ok".  Add the percentage specified of the total to the base requirement.
+  */
+ 
+ 	if (vmfree < freepages + ((vmtotal / 100) * percent_target)) {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 1) {
+ 			printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vmtotal, vmfree, ((vmfree * 100) / vmtotal), freepages, percent_target);
+ 			xval = 1;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(1);
+ 	} else {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 0) {
+ 			printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vmtotal, vmfree, ((vmfree * 100) / vmtotal), freepages, percent_target);
+ 			xval = 0;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(0);
+ 	}
+ 
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif

-- 
-- Karl
karl@denninger.net
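To see numerically what the one-line change to the default does, here is a
small sketch of the computation; the v_free_target figure is a made-up
example, since the real value depends on installed RAM and page size.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical pageout target, in pages; not a measured value. */
        unsigned int v_free_target = 50000;

        /* First revision: invade the target by 20%. */
        unsigned int freepages_20 = v_free_target - (v_free_target / 5);
        /* This revision: invade it by only 10%, so ARC paring starts
         * earlier, i.e. at a higher free-page count. */
        unsigned int freepages_10 = v_free_target - (v_free_target / 10);

        printf("20%% invasion: pare ARC below %u free pages\n", freepages_20);
        printf("10%% invasion: pare ARC below %u free pages\n", freepages_10);
        return 0;
    }

With these numbers the 20% default pares the ARC only once free pages fall
below 40000, while the 10% default acts at 45000, which is why the smaller
invasion pages executables less often under load.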
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 08:35:45 2014
Delivered-To: freebsd-fs@freebsd.org
Date: Thu, 20 Mar 2014 09:35:14 +0100
From: Matthias Gamsjager
To: Karl Denninger
Cc: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
In-Reply-To: <201403191810.s2JIA1Fv092551@freefall.freebsd.org>
References: <201403191810.s2JIA1Fv092551@freefall.freebsd.org>
List-Id: Filesystems

It's kinda messy now with multiple patches in this list.
Could you please update the original patch.txt in the PR?

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 12:30:02 2014
Date: Thu, 20 Mar 2014 12:30:01 GMT
Message-Id: <201403201230.s2KCU1vx057435@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
From: Karl Denninger
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Reply-To: Karl Denninger
List-Id: Filesystems

The following reply was made to PR kern/187594; it has been noted by GNATS.

From: Karl Denninger
To: bug-followup@FreeBSD.org, karl@fs.denninger.net
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Thu, 20 Mar 2014 07:26:39 -0500

I am increasingly convinced, with increasing runtime now on both
synthetic and real loads in production, that the proper default value for
vfs.zfs.arc_freepages is vm.stats.vm.v_free_target less "just a bit."
Five percent appears to be ok for most workloads with RAM configurations
ranging from 4GB to the 24GB area (configurations that I can easily test
under both synthetic and real environments.)

Larger invasions of the free target increasingly risk provoking the
behavior that prompted me to get involved in working on this part of the
code in the first place, including short-term (~5-10 second) "stalls"
during which the system appears to be locked up, but is not.
-- 
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 15:00:01 2014
Date: Thu, 20 Mar 2014 15:00:01 GMT
Message-Id: <201403201500.s2KF01kA002145@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
From: Andriy Gapon
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix

The following reply was made to PR kern/187594; it has been noted by GNATS.

From: Andriy Gapon
To: bug-followup@FreeBSD.org, karl@fs.denninger.net
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Thu, 20 Mar 2014 16:56:18 +0200

I think that you are gradually approaching a correct solution to the
problem, but from quite a different angle compared with how I would
approach it.  In fact, I think that it was this commit
http://svnweb.freebsd.org/changeset/base/254304 that broke the balance
between the page cache and the ZFS ARC.

On the technical side, I see that you are still using kernel_sysctlbyname
in your patches.  As I've said before, this is not needed and is in a
certain sense incorrect.
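(To illustrate the point -- a sketch, not code from any posted patch:
inside the kernel the vmmeter counters are plain globals, so a
kernel_sysctlbyname() round trip through the sysctl machinery buys
nothing.)

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/sysctl.h>
#include <sys/vmmeter.h>

static u_int
free_target_via_sysctl(void)
{
	/* The roundabout route: marshal a sysctl request in-kernel. */
	u_int val;
	size_t len = sizeof(val);

	kernel_sysctlbyname(curthread, "vm.stats.vm.v_free_target",
	    &val, &len, NULL, 0, NULL, 0);
	return (val);
}

static u_int
free_target_direct(void)
{
	/* The direct route: the counter is already in scope. */
	return (cnt.v_free_target);
}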
-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 15:02:17 2014
Message-ID: <532B02EF.4020801@netlabs.org>
Date: Thu, 20 Mar 2014 16:02:07 +0100
From: Adrian Gschwend
To: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
In-Reply-To: <201403201500.s2KF01kA002145@freefall.freebsd.org>

On 20.03.14 16:00, Andriy Gapon wrote:

> In fact, I think that it was this commit
> http://svnweb.freebsd.org/changeset/base/254304 that broke a balance between the
> page cache and ZFS ARC.

I definitely had this behavior long before that date.
regards

Adrian

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 15:06:52 2014
Message-ID: <532B03DF.7080503@FreeBSD.org>
Date: Thu, 20 Mar 2014 17:06:07 +0200
From: Andriy Gapon
To: Karl Denninger, freebsd-fs@FreeBSD.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
In-Reply-To: <201403201230.s2KCU1vx057435@freefall.freebsd.org>

on 20/03/2014 14:30 Karl Denninger said the following:
> The following reply was made to PR kern/187594; it has been noted by GNATS.
>
> From: Karl Denninger
> To: bug-followup@FreeBSD.org, karl@fs.denninger.net
> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
> Date: Thu, 20 Mar 2014 07:26:39 -0500
>
> I am increasingly convinced with increasing runtime now on both
> synthetic and real loads in production that the proper default value for
> vfs.zfs.arc_freepages is vm.stats.vm.v_free_target less "just a bit."
> Five percent appears to be ok for most workloads with RAM configurations

How about just changing the vm_paging_needed() check to a
vm_paging_target() > 0 check?  Could you please try to test this?

> ranging from 4GB to the 24GB area (configurations that I can easily test
> under both synthetic and real environments.)
>
> Larger invasions of the free target increasingly risk provocation of the
> behavior that prompted me to get involved in working this part of the
> code in the first place, including short-term (~5-10 second) "stalls"
> during which the system appears to be locked up, but is not.
>
> It appears that the key to avoiding that behavior is to not allow the
> ARC to continue to take RAM when a material invasion of that target
> space has occurred.
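(For reference, a sketch of what that test would look like -- untested,
with the surrounding arc_reclaim_needed() logic elided.
vm_paging_target() returns the number of pages the VM system currently
wants freed, so a positive value means the free target has been invaded:)

#include <sys/vmmeter.h>

	/*
	 * Inside arc_reclaim_needed(), in place of the
	 * vm_paging_needed() style check: reclaim whenever the page
	 * daemon has a positive paging target.
	 */
	if (vm_paging_target() > 0)
		return (1);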
-- 
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 15:33:20 2014
Message-ID: <532B0A37.3050307@denninger.net>
Date: Thu, 20 Mar 2014 10:33:11 -0500
From: Karl Denninger
To: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
In-Reply-To: <532B02EF.4020801@netlabs.org>

On 3/20/2014 10:02 AM, Adrian Gschwend wrote:
> On 20.03.14 16:00, Andriy Gapon wrote:
>
>> In fact, I think that it was this commit
>> http://svnweb.freebsd.org/changeset/base/254304 that broke a balance between the
>> page cache and ZFS ARC.
> I definitely had this behavior long before that date.
>
So did I -- in fact my issues with the stalls went back to my first
attempts to run ZFS.
-- 
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 17:10:01 2014
Date: Thu, 20 Mar 2014 17:10:00 GMT
Message-Id: <201403201710.s2KHA0e9043051@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
From: Karl Denninger
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix

The following reply was made to PR kern/187594; it has been noted by GNATS.

From: Karl Denninger
To: bug-followup@FreeBSD.org, karl@fs.denninger.net
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Thu, 20 Mar 2014 12:00:54 -0500

Responsive to avg's comment, and with another overnight and daytime load
of testing on multiple machines with varying memory configs from 4-24GB
of RAM, here is another version of the patch.  The differences are:

1. No longer use kernel_sysctlbyname; include the VM header file and get
the values directly (less overhead.)  Remove the variables no longer
needed.

2. Set the default free RAM level for ARC shrinkage to v_free_target
less 3%, as I was able to provoke a stall once with it set to a 5%
reservation, was able to provoke it with the parameter set to 10% with a
lot of work, and was able to do so "on demand" with it set to 20%.  With
a 5% invasion, initiating a scrub with very heavy I/O and image load
(hundreds of web and database processes) provoked a ~10 second system
stall.  With it set to 3% I have not been able to reproduce the stall,
yet the inactive page count remains stable even under extremely heavy
load, indicating that page-stealing remains effective when required.
Note that for my workload, even with this level set above v_free_target,
which would imply no page stealing by the VM system before ARC expansion
is halted, I do not get unbridled inactive page growth.

As before, vfs.zfs.arc_freepages and vfs.zfs.arc_freepage_percent remain
as accessible knobs if you wish to twist them for some reason to
compensate for an unusual load profile or machine configuration.

*** arc.c.original	Thu Mar 13 09:18:48 2014
--- arc.c	Thu Mar 20 11:51:48 2014
***************
*** 18,23 ****
--- 18,94 ----
   *
   * CDDL HEADER END
   */
+ 
+ /* Karl Denninger (karl@denninger.net), 3/20/2014, FreeBSD-specific
+  *
+  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
+  * the ARC cache to be pared down.  The reason for the change is that the
+  * apparent attempted algorithm is to start evicting ARC cache when free
+  * pages fall below 25% of installed RAM.  This maps reasonably well to how
+  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
+  * to pare down.
+  *
+  * The problem is that on FreeBSD machines the system doesn't appear to be
+  * getting what the authors of the original code thought they were looking at
+  * with its test -- or at least not what Solaris did -- and as a result that
+  * test never triggers.  That leaves the only reclaim trigger as the "paging
+  * needed" status flag, and by the time that trips the system is already
+  * in low-memory trouble.  This can lead to severe pathological behavior
+  * under the following scenario:
+  * - The system starts to page and ARC is evicted.
+  * - The system stops paging as ARC's eviction drops wired RAM a bit.
+  * - ARC starts increasing its allocation again, and wired memory grows.
+  * - A new image is activated, and the system once again attempts to page.
+  * - ARC starts to be evicted again.
+  * - Back to #2
+  *
+  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
+  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
+  * else needs it.  That would be ok if we evicted cache when required.
+  *
+  * Unfortunately the system can get into a state where it never
+  * manages to page anything of materiality back in, as if there is active
+  * I/O the ARC will start grabbing space once again as soon as the memory
+  * contention state drops.  For this reason the "paging is occurring" flag
+  * should be the **last resort** condition for ARC eviction; you want to
+  * (as Solaris does) start when there is material free RAM left BUT the
+  * vm system thinks it needs to be active to steal pages back in the attempt
+  * to never get into the condition where you're potentially paging off
+  * executables in favor of leaving disk cache allocated.
+  *
+  * To fix this we change how we look at low memory, declaring two new
+  * runtime tunables.
+  *
+  * The new sysctls are:
+  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
+  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
+  *
+  * vfs.zfs.arc_freepages is initialized from vm.v_free_target, less 3%.
+  * This should insure that we allow the VM system to steal pages first,
+  * but pare the cache before we suspend processes attempting to get more
+  * memory, thereby avoiding "stalls."
+  * You can set this higher if you wish,
+  * or force a specific percentage reservation as well, but doing so may
+  * cause the cache to pare back while the VM system remains willing to
+  * allow "inactive" pages to accumulate.  The challenge is that image
+  * activation can force things into the page space on a repeated basis
+  * if you allow this level to be too small (the above pathological
+  * behavior); the defaults should avoid that behavior but the sysctls
+  * are exposed should your workload require adjustment.
+  *
+  * If we're using this check for low memory we are replacing the previous
+  * ones, including the oddball "random" reclaim that appears to fire far
+  * more often than it should.  We still trigger if the system pages.
+  *
+  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
+  * status messages when the reclaim status trips on and off, along with the
+  * page count aggregate that triggered it (and the free space) for each
+  * event.
+  */
+ 
+ #define NEWRECLAIM
+ #undef  NEWRECLAIM_DEBUG
+ 
+ 
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
   * Copyright (c) 2013 by Delphix. All rights reserved.
***************
*** 139,144 ****
--- 210,222 ----
  
  #include
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ #include
+ #include
+ #endif
+ #endif /* NEWRECLAIM */
+ 
  #ifdef illumos
  #ifndef _KERNEL
  /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
***************
*** 203,218 ****
--- 281,316 ----
  int zfs_arc_shrink_shift = 0;
  int zfs_arc_p_min_shift = 0;
  int zfs_disable_dup_eviction = 0;
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ static int freepages = 0;	/* This much memory is considered critical */
+ static int percent_target = 0;	/* Additionally reserve "X" percent free RAM */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
  
  TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
  TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
  TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
+ TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  SYSCTL_DECL(_vfs_zfs);
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0, "Maximum ARC size");
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0, "Minimum ARC size");
  
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  /*
   * Note that buffers can be in one of 6 states:
   * ARC_anon - anonymous (discussed below)
***************
*** 2438,2443 ****
--- 2536,2546 ----
  {
  
  #ifdef _KERNEL
+ #ifdef NEWRECLAIM_DEBUG
+ 	static int xval = -1;
+ 	static int oldpercent = 0;
+ 	static int oldfreepages = 0;
+ #endif /* NEWRECLAIM_DEBUG */
  
  	if (needfree)
  		return (1);
***************
*** 2476,2481 ****
--- 2579,2585 ----
  		return (1);
  
  #if defined(__i386)
+ 
  	/*
  	 * If we're on an i386 platform, it's possible that we'll exhaust the
  	 * kernel heap space before we ever run out of available physical
***************
*** 2492,2502 ****
  		return (1);
  #endif
  #else	/* !sun */
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
- #else
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif
--- 2596,2658 ----
  		return (1);
  #endif
  #else	/* !sun */
+ 
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ /*
+  * Implement the new tunable free RAM algorithm.  We check the free pages
+  * against the minimum specified target and the percentage that should be
+  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
+  * on a FreeBSD system the older checks are not performed.
+  *
+  * Check first to see if we need to init freepages, then test.
+  */
+ 	if (!freepages) {	/* If zero then (re)init */
+ 		freepages = cnt.v_free_target - (cnt.v_free_target / 33);
+ #ifdef NEWRECLAIM_DEBUG
+ 		printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u] [%u less 3%%]\n", freepages, cnt.v_free_target);
+ #endif /* NEWRECLAIM_DEBUG */
+ 	}
+ #ifdef NEWRECLAIM_DEBUG
+ 	if (percent_target != oldpercent) {
+ 		printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, cnt.v_page_count, cnt.v_free_count);
+ 		oldpercent = percent_target;
+ 	}
+ 	if (freepages != oldfreepages) {
+ 		printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, cnt.v_page_count, cnt.v_free_count);
+ 		oldfreepages = freepages;
+ 	}
+ #endif /* NEWRECLAIM_DEBUG */
+ /*
+  * Now figure out how much free RAM we require to call the ARC cache status
+  * "ok".  Add the percentage specified of the total to the base requirement.
+  */
+ 
+ 	if (cnt.v_free_count < freepages + ((cnt.v_page_count / 100) * percent_target)) {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 1) {
+ 			printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
+ 			xval = 1;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(1);
+ 	} else {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 0) {
+ 			printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
+ 			xval = 0;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		return(0);
+ 	}
+ 
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+ 
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif

-- 
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 20 21:20:02 2014
Message-ID: <532B5A0C.1010008@incore.de>
Date: Thu, 20 Mar 2014 22:13:48 +0100
From: Andreas Longwitz
To: freebsd-fs@freebsd.org
Subject: Kernel crash trying to import a ZFS pool with log device

On a machine running FreeBSD 8.4-STABLE #0 r256119 I have in loader.conf:
   vfs.zfs.vdev.bio_delete_disable=1

-> glabel status
          Name  Status  Components
label/C325BL31     N/A  da2
label/C330CJHW     N/A  da3

-> gmirror status
         Name    Status  Components
mirror/gmsv09  COMPLETE  da0 (ACTIVE)
                         da1 (ACTIVE)
   mirror/gm0  COMPLETE  da4 (ACTIVE)
                         da5 (ACTIVE)

-> zpool status
  pool: mpool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat Mar 15 06:15:36 2014
config:

	NAME                STATE     READ WRITE CKSUM
	mpool               ONLINE       0     0     0
	  mirror-0          ONLINE       0     0     0
	    label/C325BL31  ONLINE       0     0     0
	    label/C330CJHW  ONLINE       0     0     0

errors: No known data errors

I can run "zpool export mpool" and "zpool import [mpool]" without any
problems.  After adding a log device from a free partition of a
gmirrored disk with

-> zpool add mpool log /dev/mirror/gm0p3

the pool runs fine and I have

-> zpool status
  pool: mpool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat Mar 15 06:15:36 2014
config:

	NAME                STATE     READ WRITE CKSUM
	mpool               ONLINE       0     0     0
	  mirror-0          ONLINE       0     0     0
	    label/C325BL31  ONLINE       0     0     0
	    label/C330CJHW  ONLINE       0     0     0
	logs
	  mirror/gm0p3      ONLINE       0     0     0

errors: No known data errors

But if I now run "zpool export" and "zpool import" the kernel crashes:

...
vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C325BL31.
vdev_geom_attach:102[1]: Attaching to label/C325BL31.
g_access(0xffffff0096b23a00(label/C325BL31), 1, 0, 1)
open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c0d400(label/C325BL31)
g_access(0xffffff0002ba7300(da2), 1, 0, 2)
open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2)
g_disk_access(da2, 1, 0, 2)
vdev_geom_attach:123[1]: Created geom and consumer for label/C325BL31.
vdev_geom_read_config:248[1]: Reading config from label/C325BL31...
vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C325BL31.
vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C330CJHW.
vdev_geom_attach:102[1]: Attaching to label/C330CJHW.
g_access(0xffffff00969a5280(label/C330CJHW), 1, 0, 1)
open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c02100(label/C330CJHW)
g_access(0xffffff0002b96280(da3), 1, 0, 2)
open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3)
g_disk_access(da3, 1, 0, 2)
vdev_geom_attach:143[1]: Created consumer for label/C330CJHW.
vdev_geom_read_config:248[1]: Reading config from label/C330CJHW...
vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C330CJHW.
vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3.
vdev_geom_attach:102[1]: Attaching to mirror/gm0p3.
g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1)
open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff00969c7e00(mirror/gm0p3)
g_part_access(mirror/gm0p3,1,0,1)
g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1)
open delta:[r1w0e1] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0)
GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1.
vdev_geom_attach:143[1]: Created consumer for mirror/gm0p3.
vdev_geom_read_config:248[1]: Reading config from mirror/gm0p3...
vdev_geom_open_by_path:569[1]: guid match for provider /dev/mirror/gm0p3.
g_post_event_x(0xffffffff80b16830, 0xffffff0096b24180, 2, 0)
vdev_geom_detach:163[1]: Closing access to mirror/gm0p3.
g_access(0xffffff0096b24180(mirror/gm0p3), -1, 0, -1)
open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] 0xffffff00969c7e00(mirror/gm0p3)
g_part_access(mirror/gm0p3,-1,0,-1)
g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, -1)
open delta:[r-1w0e-1] old:[r9w8e17] provider:[r9w8e17] 0xffffff00969c7d00(mirror/gm0)
GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e-1.
vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3.
vdev_geom_attach:102[1]: Attaching to mirror/gm0p3.
vdev_geom_attach:128[1]: Found consumer for mirror/gm0p3.
g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1)
open delta:[r1w0e1] old:[r1w0e1] provider:[r1w0e1] 0xffffff00969c7e00(mirror/gm0p3)
g_part_access(mirror/gm0p3,1,0,1)
g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1)
open delta:[r1w0e1] old:[r9w8e17] provider:[r9w8e17] 0xffffff00969c7d00(mirror/gm0)
GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1.
vdev_geom_detach:167[1]: Destroyed consumer to mirror/gm0p3.
g_detach(0xffffff0096b24180)
g_destroy_consumer(0xffffff0096b24180)
vdev_geom_attach:147[1]: Used existing consumer for mirror/gm0p3.
vdev_geom_read_config:248[1]:

Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x0
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80b16f01
stack pointer           = 0x28:0xffffff82452325b0
frame pointer           = 0x28:0xffffff8245232650
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 15494 (initial thread)
[thread pid 15494 tid 100151 ]
Stopped at      vdev_geom_read_config+0x71:     movq    (%rdx),%rsi

(kgdb) where
...
#9  0xffffffff805dce1b in trap (frame=0xffffff8245232500)
    at /usr/src/sys/amd64/amd64/trap.c:457
#10 0xffffffff805c3024 in calltrap ()
    at /usr/src/sys/amd64/amd64/exception.S:228
#11 0xffffffff80b16f01 in vdev_geom_read_config (cp=0xffffff0096b24180, config=0xffffff8245232670)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:248
#12 0xffffffff80b17194 in vdev_geom_read_guid (cp=)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:454
#13 0xffffffff80b172f0 in vdev_geom_open_by_path (vd=0xffffff0002b2f000, check_guid=1)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:559
#14 0xffffffff80b17528 in vdev_geom_open (vd=0xffffff0002b2f000, psize=0xffffff8245232760, max_psize=0xffffff8245232758, ashift=0xffffff8245232750)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:608
#15 0xffffffff80aca87a in vdev_open (vd=0xffffff0002b2f000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:1153
#16 0xffffffff80acac5e in vdev_reopen (vd=0xffffff0002b2f000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:1514
#17 0xffffffff80ab84e0 in spa_load (spa=0xffffff0002b85000, state=SPA_LOAD_TRYIMPORT, type=SPA_IMPORT_EXISTING, mosconfig=)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:1654
#18 0xffffffff80abaa40 in spa_tryimport (tryconfig=0xffffff00024b2260)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:4184
#19 0xffffffff80afb486 in zfs_ioc_pool_tryimport (zc=0xffffff8001f1d000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:1630
#20 0xffffffff80afea7f in zfsdev_ioctl (dev=, zcmd=, arg=0xffffff00966154c0 "\003", flag=3, td=)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:5945
#21 0xffffffff8037729b in devfs_ioctl_f (fp=0xffffff0002c98960, com=3222821382, data=, cred=, td=0xffffff0096017000)
    at /usr/src/sys/fs/devfs/devfs_vnops.c:700
#22 0xffffffff80444b22 in kern_ioctl (td=, fd=, com=3222821382, data=0xffffff00966154c0 "\003") at file.h:277
#23 0xffffffff80444d5d in ioctl (td=0xffffff0096017000, uap=0xffffff8245232bb0)
    at /usr/src/sys/kern/sys_generic.c:679
#24 0xffffffff805dbca4 in amd64_syscall (td=0xffffff0096017000, traced=0) at subr_syscall.c:114
#25 0xffffffff805c331c in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:387
#26 0x0000000180fcec2c in ?? ()
(kgdb) f 11
#11 0xffffffff80b16f01 in vdev_geom_read_config (cp=0xffffff0096b24180, config=0xffffff8245232670)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:248
248             ZFS_LOG(1, "Reading config from %s...", pp->name);
(kgdb) list
243             int error, l, len;
244
245             g_topology_assert_not();
246
247             pp = cp->provider;
248             ZFS_LOG(1, "Reading config from %s...", pp->name);
249
250             psize = pp->mediasize;
251             psize = P2ALIGN(psize, (uint64_t)sizeof(vdev_label_t));
252
(kgdb) p *cp
$1 = {geom = 0xffffff0002c0dd00, consumer = {le_next = 0xffffff00969a5280, le_prev = 0xffffff0002c0dd20}, provider = 0x0,
  consumers = {le_next = 0xffffff009607bb00, le_prev = 0xffffff00969c7e20}, acr = 1, acw = 0, ace = 1, spoiled = 0,
  stat = 0xffffff0002bed5a0, nstart = 17, nend = 17, private = 0x0, index = 0}
(kgdb) p *cp->provider
Cannot access memory at address 0x0
(kgdb) f 12
#12 0xffffffff80b17194 in vdev_geom_read_guid (cp=)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:454
454             if (vdev_geom_read_config(cp, &config) == 0) {
(kgdb) p *cp
$2 = {geom = 0xffffff0002c0dd00, consumer = {le_next = 0xffffff00969a5280, le_prev = 0xffffff0002c0dd20}, provider = 0x0,
  consumers = {le_next = 0xffffff009607bb00, le_prev = 0xffffff00969c7e20}, acr = 1, acw = 0, ace = 1, spoiled = 0,
  stat = 0xffffff0002bed5a0, nstart = 17, nend = 17, private = 0x0, index = 0}
(kgdb) info local
config = (nvlist_t *) 0xffffff0002b85000
guid = 0
(kgdb) list
449             uint64_t guid;
450
451             g_topology_assert_not();
452
453             guid = 0;
454             if (vdev_geom_read_config(cp, &config) == 0) {
455                     guid = nvlist_get_guid(config);
456                     nvlist_free(config);
457             }
458             return (guid);
(kgdb) f 13
#13 0xffffffff80b172f0 in vdev_geom_open_by_path (vd=0xffffff0002b2f000, check_guid=1)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:559
559                             guid = vdev_geom_read_guid(cp);
(kgdb) list
554             ZFS_LOG(1, "Found provider by name %s.", vd->vdev_path);
555             cp = vdev_geom_attach(pp);
556             if (cp != NULL && check_guid && ISP2(pp->sectorsize) &&
557                 pp->sectorsize <= VDEV_PAD_SIZE) {
558                     g_topology_unlock();
559                     guid = vdev_geom_read_guid(cp);
560                     g_topology_lock();
561                     if (guid != vd->vdev_guid) {
562                             vdev_geom_detach(cp, 0);
563                             cp = NULL;
(kgdb) info local
pp = (struct g_provider *) 0xffffff00969c7e00
cp = (struct g_consumer *) 0xffffff0096b24180
guid =
__func__ = "ÿÿ\000\000H\213uÀ\211Ø\211\235üþÿÿ\203À\001\205ÀH\213"
(kgdb) p *cp
$3 = {geom = 0xffffff0002c0dd00, consumer = {le_next = 0xffffff00969a5280, le_prev = 0xffffff0002c0dd20}, provider = 0x0,
  consumers = {le_next = 0xffffff009607bb00, le_prev = 0xffffff00969c7e20}, acr = 1, acw = 0, ace = 1, spoiled = 0,
  stat = 0xffffff0002bed5a0, nstart = 17, nend = 17, private = 0x0, index = 0}
(kgdb) p *pp
$4 = {name = 0xffffff00969c7e88 "mirror/gm0p3", provider = {le_next = 0xffffff009698a100, le_prev = 0xffffff0002c03208},
  geom = 0xffffff0002c68000, consumers = {lh_first = 0xffffff009607bb00}, acr = 1, acw = 0, ace = 1, error = 0,
  orphan = {tqe_next = 0x0, tqe_prev = 0x0}, mediasize = 8589934592, sectorsize = 512, stripesize = 0,
  stripeoffset = 226575360, stat = 0xffffff0002be7120, nstart = 157, nend = 157, flags = 0,
  private = 0xffffff00969c7c00, index = 2}
(kgdb) quit

The technical reason for the crash is that in "f 11" we have
pp = cp->provider = 0.  I can give more information from the kernel dump;
also, I can easily repeat the crash.
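(A sketch of what a defensive guard might look like -- this only
illustrates where the NULL dereference happens, and is not a reviewed
fix; the real question is why the consumer returned by
vdev_geom_attach() has already been detached when it is reused:)

	/*
	 * In vdev_geom_read_config(): the trap fires because the
	 * consumer was g_detach()ed, leaving cp->provider NULL, before
	 * the config read.  A guard would fail gracefully instead of
	 * panicking.
	 */
	pp = cp->provider;
	if (pp == NULL)
		return (ENXIO);	/* consumer no longer attached */
	ZFS_LOG(1, "Reading config from %s...", pp->name);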
-- 
Andreas Longwitz

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 00:33:37 2014
Date: Thu, 20 Mar 2014 17:33:34 -0700
Subject: Re: rsync w/ fake-super -> crashes zfs
From: javocado
To: FreeBSD Filesystems

Gathering more info here...

When re-running the rsync command, stripped down:

rsync -av --rsync-path="rsync --fake-super" /src /dst

we're not seeing any system crashes, yet, but rsync is choking/quitting
quite a bit, saying a file here or there has a corrupt extattr.

Does this point to a problem with ZFS, or more with rsync?  Any thoughts
on the fake-super impact and how it leads to crashing?
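(One way to narrow it down -- a small userland probe, untested against
this setup, that reads the attribute the way the receiver stores it:
--fake-super on the FreeBSD side writes into the user extattr namespace,
and "rsync.%stat" is the attribute name from the errors below:)

#include <sys/types.h>
#include <sys/extattr.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
	char buf[256];
	ssize_t n;

	if (argc != 2) {
		fprintf(stderr, "usage: %s file\n", argv[0]);
		return (1);
	}
	/* Try to read the fake-super attribute on the given file. */
	n = extattr_get_file(argv[1], EXTATTR_NAMESPACE_USER,
	    "rsync.%stat", buf, sizeof(buf) - 1);
	if (n < 0) {
		fprintf(stderr, "%s: %s\n", argv[1], strerror(errno));
		return (1);
	}
	buf[n] = '\0';
	printf("rsync.%%stat = %s\n", buf);
	return (0);
}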
On Fri, Mar 14, 2014 at 4:57 PM, javocado wrote:

> System specifics:
> ZFS version 28
> FreeBSD 8.3-RELEASE
>
> We're seeing a repeatable outcome where a remote rsync command like:
>
> rsync -axzHAXS --rsync-path="rsync --fake-super" --exclude '*/rsync.%stat'
>
> backing up to our zfs filesystem (with 15M inodes) will lead to a panic
> with output like:
>
> Fatal trap 12: page fault while in kernel mode
> cpuid = 4; apic id = 04
> fault virtual address   = 0x160
> fault code              = supervisor read data, page not present
> instruction pointer     = 0x20:0xffffffff80abb546
> stack pointer           = 0x28:0xffffff976c62b910
> frame pointer           = 0x28:0xffffff976c62b9d0
> code segment            = base 0x0, limit 0xfffff, type 0x1b
>                         = DPL 0, pres 1, long 1, def32 0, gran 1
> processor eflags        = interrupt enabled, resume, IOPL = 0
> current process         = 7295 (rsync)
> [thread pid 7295 tid 101008 ]
> Stopped at      zfs_freebsd_remove+0x426:  movq  0x160(%rax),%rsi
>
> On the sending side (RHEL, ext3), rsync reports errors like:
>
> rsync: failed to read xattr rsync.%stat
> rsync: failed to write xattr rsync.%stat
> rsync: get_xattr_names: llistxattr
>
> which we've seen occasionally with other systems when running rsync with
> fake-super, but it usually doesn't lead to a crash.*
>
> On the receiving side, other than the crashes, we are seeing a few new
> files (that don't exist on the source) named:
>
> rsync.%stat
>
> which correspond to and contain the owner and permission attributes that
> should have been stored in the extattr's for the file or directory.  Not
> sure if they are a red herring, but they're usually not something we see
> (perhaps that's related to the --exclude '*/rsync.%stat' and rsync not
> being able to cleanup properly).
>
> We are still testing to see if any options in the rsync command (above)
> may be contributing to the crash, since fake-super in and of itself runs
> fine under basic (rsync -av --rsync-path="rsync --fake-super" /src /dst)
> circumstances.  But we suspect that the problem is related to fake-super
> and its reliance upon extattr's.
>
> What we really need is a solution to the crashing - some way to make ZFS
> stop choking on whatever --fake-super produces and/or how it's interacting
> with extattr's on ZFS.
>
> Thanks!
>
> * we sometimes also see on the sending side w/ fake-super:
> rsync: failed to write xattr rsync.%stat for "xxxxxx/file" : No such file
> or directory (2)
>
> when (1) the file exists, (2) it's a symlink
> but that isn't happening in this instance.
> We only mention it here as
> another oddity of fake-super + ZFS + extattr

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 02:05:05 2014
Date: Sun, 16 Mar 2014 21:07:09 +0100
From: Rainer Duffner
To: freebsd-fs@freebsd.org
Subject: Is it possible to access a jailed zfs filesystem outside the jail?
Message-ID: <20140316210709.51cb84af@linux-wb36.example.org>

Hi,

to make backup simpler, I'd like to be able to mount the filesystems I
dedicated to jails on the host as well (defeats the purpose a bit, I
know).  I only need read-only access (for backup).

Is that possible?

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 02:31:24 2014
Sender: Devin Teske
To: "'Rainer Duffner'" ,
In-Reply-To: <20140316210709.51cb84af@linux-wb36.example.org>
Subject: RE: Is it possible to access a jailed zfs filesystem outside the jail?
Date: Thu, 20 Mar 2014 19:31:05 -0700
Message-ID: <037001cf44ad$9d45ebf0$d7d1c3d0$@FreeBSD.org>

> -----Original Message-----
> From: Rainer Duffner [mailto:rainer@ultra-secure.de]
> Sent: Sunday, March 16, 2014 1:07 PM
> To: freebsd-fs@freebsd.org
> Subject: Is it possible to access a jailed zfs filesystem outside the jail?
>
> Hi,
>
> to make backup simpler, I'd like to be able to mount the filesystems I
> dedicated to jails on the host as well (defeats the purpose a bit, I know).
> I only need read-only access (for backup).
>
> Is that possible?

mount_nullfs -o ro /dirA /dirB

Allows read-only access to /dirA through /dirB
-- 
Devin

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 02:50:42 2014
Date: Thu, 20 Mar 2014 19:50:41 -0700
In-Reply-To: <20140316210709.51cb84af@linux-wb36.example.org>
jail? From: Freddie Cash To: Rainer Duffner Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 02:50:42 -0000 On Mar 20, 2014 7:05 PM, "Rainer Duffner" wrote: > > Hi, > > to make backup simpler, I'd like to be able to mount the filesystems I > dedicated to jails on the host as well (beats the purpose a bit, I > know). > I only need read-only access (for backup). > > Is that possible? Snapshot the filesystem. Then you can either backup from the .zfs/snapshot/[snapname]/ directory, or manually mount the snapshot somewhere else. From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 03:14:26 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 11882990; Fri, 21 Mar 2014 03:14:26 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id BA1B2C9B; Fri, 21 Mar 2014 03:14:25 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEABauK1ODaFve/2dsb2JhbABZhBiDB7w7gw6BK3SCJQEBAQQjBFIbDgoCAg0ZAlkGiAytKaJiF4EpjQg0B4JvgUkEqnqDSSGBLAc7 X-IronPort-AV: E=Sophos;i="4.97,700,1389762000"; d="scan'208";a="107794521" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 20 Mar 2014 23:14:18 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 68882B3F0B; Thu, 20 Mar 2014 23:14:18 -0400 (EDT) Date: Thu, 20 Mar 2014 23:14:18 -0400 (EDT) From: Rick Macklem To: Alexander Motin Message-ID: <2106150833.655954.1395371658421.JavaMail.root@uoguelph.ca> In-Reply-To: <532947C9.9010607@FreeBSD.org> Subject: Re: review/test: NFS patch to use pagesize mbuf clusters MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 03:14:26 -0000 Alexander Motin wrote: > On 19.03.2014 01:57, Rick Macklem wrote: > > Alexander Motin wrote: > >> I run several profiles on em NIC with and without the patch. I can > >> confirm that without the patch m_defrag() is indeed called, while > >> with > >> patch it is not any more. But profiler shows to me that very small > >> amount of time (percents or even fractions) is spent there. I > >> can't > >> measure the effect (my Core-i7 desktop test system has only about > >> 5% > >> CPU > >> load while serving full 1Gbps NFS over the em), though I can't say > >> for > >> sure that effect can't be there on some low-end system. > >> > > Well, since m_defrag() creates a new list and bcopy()s the data, > > there > > is some overhead, although I'm not surprised it isn't that easy to > > measure. > > (I thought your server built entirely of SSDs might show a > > difference.) 
> > I did my test even from TMPFS, not SSD, but mentioned em NIC is only > 1Gbps, that is too slow to reasonably load the system. > > > I am more concerned with the possibility of m_defrag() failing and > > the > > driver dropping the reply, forcing the client to do a fresh TCP > > connection > > and retry of the RPC after a long timeout (1 minute or more). This > > will > > show up as "terrible performance" for users. > > > > Also, some drivers use m_collapse() instead of m_defrag() and these > > will probably be "train wrecks". I get cases where reports of > > serious > > NFS problems get "fixed" by disabling TSO and I was hoping this > > would > > work around that. > > Yes, I accept that argument. I don't see much reason to cut > continuous > data in small chunks. > > >> I am also not very sure about replacing M_WAITOK with M_NOWAIT. > >> Instead > >> of waiting a bit while VM finds a cluster, NFSMCLGET() will return a > >> single > >> mbuf; as a result, replacing a chain of 2K clusters instead of 4K ones > >> with > >> a chain of 256b mbufs. > >> > > I hoped the comment in the patch would explain this. > > > > When I was testing (on a small i386 system), I succeeded in getting > > threads stuck sleeping on "btalloc" a couple of times when I used > > M_WAITOK for m_getjcl(). As far as I could see, this indicated that > > it had run out of kernel address space, but I'm not sure. > > --> That is why I used M_NOWAIT for m_getjcl(). > > > > As for using MCLGET(..M_NOWAIT), the main reason for doing that > > was I noticed that the code does a drain on zone_mcluster if this > > allocation attempt for a cluster fails. For some reason, m_getcl() > > and m_getjcl() do not do this drain of the zone? > > I thought the drain might help memory constrained cases. > > To be honest, I've never been able to get a MCLGET(..M_NOWAIT) > > to fail during testing. > > If it is true, I think that should be handled inside the allocation > code, not worked around here. Passing M_NOWAIT means that you agree > to > get NULL there, but IMO you don't really want to cut 64K data into ~200 > byte pieces in any case even if the system is in a low-memory condition, > since > at least most NICs won't be able to send it without defragging, which > will also be problematic in the low-memory case. > Yep. It looks like calling m_getjcl(..M_NOWAIT..) is worse than m_getjcl(..M_WAITOK..). Using M_NOWAIT does avoid getting stuck looping and sleeping on "btalloc", however... I thought it would result in m_getjcl() returning NULL. What actually happens is it now loops in "R" state. Unfortunately, before I got a lot of info on it, the machine wedged pretty good. I'm now trying to make it happen again so I can poke at it some more, but it seems that this needs to be resolved before the patch could go in head. As a complete aside, it looks like the loop in tcp_output() may be broken and generating TSO segments > 65535, and this might explain the headaches w.r.t. TSO-enabled interfaces. (See the ixgbe thread over on freebsd-net@.) I'll post if/when I have more on how UMA(9) behaves when the boundary tag zone can't seem to do an allocation. (It seems it results in an allocation request for the mbuf page cluster zone looping instead of returning NULL, but I'm not sure yet.) Anyone familiar with UMA(9) and what the boundary tags are for, feel free to jump in here and explain it, because I don't know diddly about it at this point.
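(To make the alternatives concrete, here is a minimal sketch of the allocation strategy being discussed. The function name and the exact fallback policy are illustrative only; m_getjcl(), m_getcl(), MJUMPAGESIZE, MT_DATA and the M_WAITOK/M_NOWAIT flags are real kernel interfaces, but this is not the actual patch.)

static struct mbuf *
nfsm_getpagecl_sketch(void)
{
	struct mbuf *m;

	/*
	 * First try a page-size (4K) cluster without sleeping: a 64K
	 * reply then needs roughly half as many mbufs, which keeps
	 * TSO'd drivers under their transmit-segment limits without
	 * resorting to m_defrag().
	 */
	m = m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE);
	if (m == NULL) {
		/*
		 * Fall back to an ordinary mbuf + 2K cluster and allow
		 * sleeping, rather than degrading to a chain of
		 * 256-byte mbufs when the jumbo zone is exhausted.
		 */
		m = m_getcl(M_WAITOK, MT_DATA, 0);
	}
	return (m);
}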
Thanks for testing it and stay tuned, rick > -- > Alexander Motin > From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 07:44:55 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 28F209CD for ; Fri, 21 Mar 2014 07:44:55 +0000 (UTC) Received: from mail.time-domain.co.uk (host81-142-251-212.in-addr.btopenworld.com [81.142.251.212]) by mx1.freebsd.org (Postfix) with ESMTP id 714E466C for ; Fri, 21 Mar 2014 07:44:53 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by mail.time-domain.co.uk (8.14.3/8.14.3) with ESMTP id s2L7eEiu024452; Fri, 21 Mar 2014 07:40:15 GMT Date: Fri, 21 Mar 2014 07:40:14 +0000 (GMT) From: andy thomas X-X-Sender: andy-tds@mail.time-domain.co.uk To: Rainer Duffner Subject: Re: Is it possible to access a jailed zfs filesystem outside the jail? In-Reply-To: <20140316210709.51cb84af@linux-wb36.example.org> Message-ID: References: <20140316210709.51cb84af@linux-wb36.example.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: clamav-milter 0.97.5 at mail X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 07:44:55 -0000 On Sun, 16 Mar 2014, Rainer Duffner wrote: > Hi, > > to make backup simpler, I'd like to be able to mount the filesystems I > dedicated to jails on the host as well (beats the purpose a bit, I > know). > I only need read-only access (for backup). > > Is that possible? Yes, jail filesystems are part of the host's filesystem, so if your jail happens to be located at /home/jails/myjail then you'll find all the jail files under there and can back them up in the usual way.
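For example, any of the following works from the host (dataset and path names here are only illustrative -- substitute your own):

  # plain file-level copy of the jail's tree:
  tar -C /home/jails/myjail -czf /backup/myjail.tar.gz .

  # or, as suggested earlier in the thread, snapshot the dataset first
  # so the backup sees a frozen, consistent image while the jail runs:
  zfs snapshot tank/jails/myjail@backup
  tar -C /home/jails/myjail/.zfs/snapshot/backup -czf /backup/myjail.tar.gz .

  # a read-only nullfs mount of the jail root is another option:
  mount_nullfs -o ro /home/jails/myjail /mnt/myjail-ro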
Andy From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 09:39:02 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D7B911CB; Fri, 21 Mar 2014 09:39:02 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 07225FF3; Fri, 21 Mar 2014 09:38:57 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA28136; Fri, 21 Mar 2014 11:38:54 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WQvug-000Nhy-Aq; Fri, 21 Mar 2014 11:38:54 +0200 Message-ID: <532C085D.3020201@FreeBSD.org> Date: Fri, 21 Mar 2014 11:37:33 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Andreas Longwitz , freebsd-fs@FreeBSD.org, freebsd-geom@FreeBSD.org Subject: g_mirror_access() dropping geom topology_lock [Was: Kernel crash trying to import a ZFS pool with log device] References: <532B5A0C.1010008@incore.de> In-Reply-To: <532B5A0C.1010008@incore.de> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-15 Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 09:39:02 -0000 on 20/03/2014 23:13 Andreas Longwitz said the following: [snip] > But if I now run "zpool export" and "zpool import" the kernel crashes: > ... > vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C325BL31. > vdev_geom_attach:102[1]: Attaching to label/C325BL31. > g_access(0xffffff0096b23a00(label/C325BL31), 1, 0, 1) > open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] > 0xffffff0002c0d400(label/C325BL31) > g_access(0xffffff0002ba7300(da2), 1, 0, 2) > open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2) > g_disk_access(da2, 1, 0, 2) > vdev_geom_attach:123[1]: Created geom and consumer for label/C325BL31. > vdev_geom_read_config:248[1]: Reading config from label/C325BL31... > vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C325BL31. > vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C330CJHW. > vdev_geom_attach:102[1]: Attaching to label/C330CJHW. > g_access(0xffffff00969a5280(label/C330CJHW), 1, 0, 1) > open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] > 0xffffff0002c02100(label/C330CJHW) > g_access(0xffffff0002b96280(da3), 1, 0, 2) > open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3) > g_disk_access(da3, 1, 0, 2) > vdev_geom_attach:143[1]: Created consumer for label/C330CJHW. > vdev_geom_read_config:248[1]: Reading config from label/C330CJHW... > vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C330CJHW. > vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3. > vdev_geom_attach:102[1]: Attaching to mirror/gm0p3. 
> g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1) > open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] > 0xffffff00969c7e00(mirror/gm0p3) > g_part_access(mirror/gm0p3,1,0,1) > g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1) > open delta:[r1w0e1] old:[r8w8e16] provider:[r8w8e16] > 0xffffff00969c7d00(mirror/gm0) > GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1. The following part of the log is very informative. Thank you for capturing it. > vdev_geom_attach:143[1]: Created consumer for mirror/gm0p3. > vdev_geom_read_config:248[1]: Reading config from mirror/gm0p3... > vdev_geom_open_by_path:569[1]: guid match for provider /dev/mirror/gm0p3. I read the above as thread A entering vdev_geom_open_by_path and trying to "taste" /dev/mirror/gm0p3. > g_post_event_x(0xffffffff80b16830, 0xffffff0096b24180, 2, 0) > vdev_geom_detach:163[1]: Closing access to mirror/gm0p3. > g_access(0xffffff0096b24180(mirror/gm0p3), -1, 0, -1) > open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] > 0xffffff00969c7e00(mirror/gm0p3) Simultaneously thread B is closing access /dev/mirror/gm0p3. It is not clear from the quoted log when and how this thread B got access to the device in the first place. > g_part_access(mirror/gm0p3,-1,0,-1) > g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, -1) > open delta:[r-1w0e-1] old:[r9w8e17] provider:[r9w8e17] > 0xffffff00969c7d00(mirror/gm0) > GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e-1. > vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3. > vdev_geom_attach:102[1]: Attaching to mirror/gm0p3. > vdev_geom_attach:128[1]: Found consumer for mirror/gm0p3. > g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1) > open delta:[r1w0e1] old:[r1w0e1] provider:[r1w0e1] > 0xffffff00969c7e00(mirror/gm0p3) Thread A is accessing the device. > g_part_access(mirror/gm0p3,1,0,1) > g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1) > Open delta:[r1w0e1] old:[r9w8e17] provider:[r9w8e17] > 0xffffff00969c7d00(mirror/gm0) > GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1. > vdev_geom_detach:167[1]: Destroyed consumer to mirror/gm0p3. > g_detach(0xffffff0096b24180) > g_destroy_consumer(0xffffff0096b24180) Thread B is destroying a special ZFS "taster" consumer. > vdev_geom_attach:147[1]: Used existing consumer for mirror/gm0p3. Thread A is trying to re-use the taster consumer, which has just been destroyed. > vdev_geom_read_config:248[1]: > > Fatal trap 12: page fault while in kernel mode Boom! I see two issues here. First, the ZFS tasting code could be made more robust. If it never tried to re-use the consumer and always created a new one, then most likely this crash could be avoided. But there is no bug in the code. The code is correct and it uses the GEOM topology lock to avoid any concurrency issues. But GEOM mirror code breaks a contract on which the ZFS code relies. g_access() must be called with the topology lock held. I extend this requirement to a requirement that the access method of any GEOM provider must operate under the topology lock and must never drop it. In other words, if a caller must acquire g_topology_lock before calling g_access, then in return it must have a guarantee that the GEOM topology stays unchanged across the call to g_access(). g_mirror_access() breaks the above contract. So, the code in vdev_geom_attach() obtains g_topology_lock, then it finds an existing valid consumer and calls g_access() on it.
It reasonably expects that the consumer remains valid, but because g_mirror_access() drops and requires the topology lock, there is a chance that the topology can change and the consumer may become invalid. I am not very familiar with gmirror code, so I am not sure how to fix the problem from that end. > cpuid = 1; apic id = 01 > fault virtual address = 0x0 > fault code = supervisor read data, page not present > instruction pointer = 0x20:0xffffffff80b16f01 > stack pointer = 0x28:0xffffff82452325b0 > frame pointer = 0x28:0xffffff8245232650 > code segment = base 0x0, limit 0xfffff, type 0x1b > = DPL 0, pres 1, long 1, def32 0, gran 1 > processor eflags = interrupt enabled, resume, IOPL = 0 > current process = 15494 (initial thread) > [thread pid 15494 tid 100151 ] > Stopped at vdev_geom_read_config+0x71: movq (%rdx),%rsi > > (kgdb) where > ... > #9 0xffffffff805dce1b in trap (frame=0xffffff8245232500) at > /usr/src/sys/amd64/amd64/trap.c:457 > #10 0xffffffff805c3024 in calltrap () at > /usr/src/sys/amd64/amd64/exception.S:228 > #11 0xffffffff80b16f01 in vdev_geom_read_config (cp=0xffffff0096b24180, > config=0xffffff8245232670) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:248 > #12 0xffffffff80b17194 in vdev_geom_read_guid (cp=) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:454 > #13 0xffffffff80b172f0 in vdev_geom_open_by_path (vd=0xffffff0002b2f000, > check_guid=1) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:559 > #14 0xffffffff80b17528 in vdev_geom_open (vd=0xffffff0002b2f000, > psize=0xffffff8245232760, max_psize=0xffffff8245232758, > ashift=0xffffff8245232750) at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:608 > #15 0xffffffff80aca87a in vdev_open (vd=0xffffff0002b2f000) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:1153 > #16 0xffffffff80acac5e in vdev_reopen (vd=0xffffff0002b2f000) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:1514 > #17 0xffffffff80ab84e0 in spa_load (spa=0xffffff0002b85000, > state=SPA_LOAD_TRYIMPORT, type=SPA_IMPORT_EXISTING, > mosconfig=) at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:1654 > #18 0xffffffff80abaa40 in spa_tryimport (tryconfig=0xffffff00024b2260) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:4184 > #19 0xffffffff80afb486 in zfs_ioc_pool_tryimport (zc=0xffffff8001f1d000) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:1630 > #20 0xffffffff80afea7f in zfsdev_ioctl (dev=, > zcmd=, arg=0xffffff00966154c0 "\003", > flag=3, td=) at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:5945 > #21 0xffffffff8037729b in devfs_ioctl_f (fp=0xffffff0002c98960, > com=3222821382, data=, > cred=, td=0xffffff0096017000) at > /usr/src/sys/fs/devfs/devfs_vnops.c:700 > #22 0xffffffff80444b22 in kern_ioctl (td=, > fd=, com=3222821382, > data=0xffffff00966154c0 "\003") at file.h:277 > #23 0xffffffff80444d5d in ioctl (td=0xffffff0096017000, > uap=0xffffff8245232bb0) at /usr/src/sys/kern/sys_generic.c:679 > #24 0xffffffff805dbca4 in amd64_syscall (td=0xffffff0096017000, > traced=0) at subr_syscall.c:114 > #25 0xffffffff805c331c in Xfast_syscall () at > /usr/src/sys/amd64/amd64/exception.S:387 > #26 0x0000000180fcec2c in ?? 
() > > (kgdb) f 11 > #11 0xffffffff80b16f01 in vdev_geom_read_config (cp=0xffffff0096b24180, > config=0xffffff8245232670) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:248 > 248 ZFS_LOG(1, "Reading config from %s...", pp->name); > (kgdb) list > 243 int error, l, len; > 244 > 245 g_topology_assert_not(); > 246 > 247 pp = cp->provider; > 248 ZFS_LOG(1, "Reading config from %s...", pp->name); > 249 > 250 psize = pp->mediasize; > 251 psize = P2ALIGN(psize, (uint64_t)sizeof(vdev_label_t)); > 252 > (kgdb) p *cp > $1 = {geom = 0xffffff0002c0dd00, consumer = {le_next = > 0xffffff00969a5280, le_prev = 0xffffff0002c0dd20}, provider = 0x0, > consumers = {le_next = 0xffffff009607bb00, le_prev = > 0xffffff00969c7e20}, acr = 1, acw = 0, ace = 1, spoiled = 0, > stat = 0xffffff0002bed5a0, nstart = 17, nend = 17, private = 0x0, > index = 0} > (kgdb) p *cp->provider > Cannot access memory at address 0x0 > > (kgdb) f 12 > #12 0xffffffff80b17194 in vdev_geom_read_guid (cp=) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:454 > 454 if (vdev_geom_read_config(cp, &config) == 0) { > (kgdb) p *cp > $2 = {geom = 0xffffff0002c0dd00, consumer = {le_next = > 0xffffff00969a5280, le_prev = 0xffffff0002c0dd20}, provider = 0x0, > consumers = {le_next = 0xffffff009607bb00, le_prev = > 0xffffff00969c7e20}, acr = 1, acw = 0, ace = 1, spoiled = 0, > stat = 0xffffff0002bed5a0, nstart = 17, nend = 17, private = 0x0, > index = 0} > (kgdb) info local > config = (nvlist_t *) 0xffffff0002b85000 > guid = 0 > (kgdb) list > 449 uint64_t guid; > 450 > 451 g_topology_assert_not(); > 452 > 453 guid = 0; > 454 if (vdev_geom_read_config(cp, &config) == 0) { > 455 guid = nvlist_get_guid(config); > 456 nvlist_free(config); > 457 } > 458 return (guid); > > (kgdb) f 13 > #13 0xffffffff80b172f0 in vdev_geom_open_by_path (vd=0xffffff0002b2f000, > check_guid=1) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c:559 > 559 guid = vdev_geom_read_guid(cp); > (kgdb) list > 554 ZFS_LOG(1, "Found provider by name %s.", > vd->vdev_path); > 555 cp = vdev_geom_attach(pp); > 556 if (cp != NULL && check_guid && > ISP2(pp->sectorsize) && > 557 pp->sectorsize <= VDEV_PAD_SIZE) { > 558 g_topology_unlock(); > 559 guid = vdev_geom_read_guid(cp); > 560 g_topology_lock(); > 561 if (guid != vd->vdev_guid) { > 562 vdev_geom_detach(cp, 0); > 563 cp = NULL; > (kgdb) info local > pp = (struct g_provider *) 0xffffff00969c7e00 > cp = (struct g_consumer *) 0xffffff0096b24180 > guid = > __func__ = "ÿÿ\000\000H\213uÀ\211Ø\211\235üþÿÿ\203À\001\205ÀH\213" > (kgdb) p *cp > $3 = {geom = 0xffffff0002c0dd00, consumer = {le_next = > 0xffffff00969a5280, le_prev = 0xffffff0002c0dd20}, provider = 0x0, > consumers = {le_next = 0xffffff009607bb00, le_prev = > 0xffffff00969c7e20}, acr = 1, acw = 0, ace = 1, spoiled = 0, > stat = 0xffffff0002bed5a0, nstart = 17, nend = 17, private = 0x0, > index = 0} > (kgdb) p *pp > $4 = {name = 0xffffff00969c7e88 "mirror/gm0p3", provider = {le_next = > 0xffffff009698a100, le_prev = 0xffffff0002c03208}, > geom = 0xffffff0002c68000, consumers = {lh_first = > 0xffffff009607bb00}, acr = 1, acw = 0, ace = 1, error = 0, orphan = { > tqe_next = 0x0, tqe_prev = 0x0}, mediasize = 8589934592, sectorsize > = 512, stripesize = 0, stripeoffset = 226575360, > stat = 0xffffff0002be7120, nstart = 157, nend = 157, flags = 0, > private = 0xffffff00969c7c00, index = 2} > (kgdb) quit > > The technical reason for the crash is that in "f 11" 
we have pp = > cp->provider = 0. > I can give more information from the kernel dump, also I can easily repeat > the crash. > -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 10:04:33 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DC62CEC0; Fri, 21 Mar 2014 10:04:33 +0000 (UTC) Received: from mail.dawidek.net (garage.dawidek.net [91.121.88.72]) by mx1.freebsd.org (Postfix) with ESMTP id A0E172B0; Fri, 21 Mar 2014 10:04:32 +0000 (UTC) Received: from localhost (58.wheelsystems.com [83.12.187.58]) by mail.dawidek.net (Postfix) with ESMTPSA id 674F31E6; Fri, 21 Mar 2014 11:04:24 +0100 (CET) Date: Fri, 21 Mar 2014 11:06:33 +0100 From: Pawel Jakub Dawidek To: Andriy Gapon Subject: Re: g_mirror_access() dropping geom topology_lock [Was: Kernel crash trying to import a ZFS pool with log device] Message-ID: <20140321100633.GA1656@garage.freebsd.pl> References: <532B5A0C.1010008@incore.de> <532C085D.3020201@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="EVF5PPMfhYS0aIcm" Content-Disposition: inline In-Reply-To: <532C085D.3020201@FreeBSD.org> X-OS: FreeBSD 11.0-CURRENT amd64 User-Agent: Mutt/1.5.22 (2013-10-16) Cc: freebsd-fs@FreeBSD.org, Andreas Longwitz , freebsd-geom@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 10:04:33 -0000 --EVF5PPMfhYS0aIcm Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Mar 21, 2014 at 11:37:33AM +0200, Andriy Gapon wrote: > I see two issues here. > First, the ZFS tasting code could be made more robust. If it never tried to > re-use the consumer and always created a new one, then most likely this crash > could be avoided. But there is no bug in the code. The code is correct and it > uses the GEOM topology lock to avoid any concurrency issues. This is the problem, in my opinion. GEOM classes have to have the ability to drop the topology lock in the access method. Without such ability any more complex GEOM class cannot work or will require tons of hacks to do their job. Not only my GEOM classes do that - GRAID does the same. I'd much prefer for us to accept the fact that GEOM classes are allowed to drop the topology lock in their access methods and fix it in ZFS. > But GEOM mirror code breaks a contract on which the ZFS code relies. > g_access() must be called with the topology lock held. > I extend this requirement to a requirement that the access method of any GEOM > provider must operate under the topology lock and must never drop it. > In other words, if a caller must acquire g_topology_lock before calling > g_access, then in return it must have a guarantee that the GEOM topology stays > unchanged across the call to g_access(). > g_mirror_access() breaks the above contract. > > So, the code in vdev_geom_attach() obtains g_topology_lock, then it finds an > existing valid consumer and calls g_access() on it. It reasonably expects that > the consumer remains valid, but because g_mirror_access() drops and requires the > topology lock, there is a chance that the topology can change and the consumer > may become invalid.
> > I am not very familiar with gmirror code, so I am not sure how to fix the > problem from that end. -- Pawel Jakub Dawidek http://www.wheelsystems.com FreeBSD committer http://www.FreeBSD.org Am I Evil? Yes, I Am! http://mobter.com --EVF5PPMfhYS0aIcm Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (FreeBSD) iEYEARECAAYFAlMsDykACgkQForvXbEpPzT2/wCgikwhKj4jipMzxnUyD8EvW0Ag vWIAoK8QSmWe+fx5e7x99qfP3JqmlGCL =JY2h -----END PGP SIGNATURE----- --EVF5PPMfhYS0aIcm-- From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 10:08:42 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D7950196 for ; Fri, 21 Mar 2014 10:08:42 +0000 (UTC) Received: from elf.hq.norma.perm.ru (mail.norma.perm.ru [IPv6:2001:470:1f09:14c0::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AF20B2E5 for ; Fri, 21 Mar 2014 10:08:41 +0000 (UTC) Received: from bsdrookie.norma.com. (bsdrookie.norma.com [192.168.7.224]) by elf.hq.norma.perm.ru (8.14.5/8.14.5) with ESMTP id s2LA8cZp097751 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 21 Mar 2014 16:08:38 +0600 (YEKT) (envelope-from emz@norma.perm.ru) Message-ID: <532C0FA6.9050005@norma.perm.ru> Date: Fri, 21 Mar 2014 16:08:38 +0600 From: "Eugene M. Zheganin" User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: crash on zpool import - help get data back Content-Type: text/plain; charset=KOI8-R Content-Transfer-Encoding: 7bit X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (elf.hq.norma.perm.ru [192.168.3.10]); Fri, 21 Mar 2014 16:08:38 +0600 (YEKT) X-Spam-Status: No hits=-101.0 bayes=0.5 testhits ALL_TRUSTED=-1, USER_IN_WHITELIST=-100 autolearn=unavailable version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on elf.hq.norma.perm.ru X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 10:08:42 -0000 Hi. After some time using zfs on a 10.x server (and a couple of panics) I'm now getting a reproducible and repeatable panic on any operation involving a particular pool. I managed to boot from a USB stick, import only healthy pools and substitute the server's zpool.cache file with one referencing only healthy pools. Now I'm able to boot, but when I try to import the remaining pool I'm getting the panic (attached below). Now the questions: - do I understand correctly that "#7 0xffffffff8188e076 in vdev_readable (vd=0x0)" means vd is NULL, and this is triggering the panic? - I saw a similar (but not identical) panic in http://lists.freebsd.org/pipermail/freebsd-fs/2012-January/013513.html , - are there any possible tricks that could help me get my data back? The target pool itself shows as healthy too: # zpool import pool: xtank id: 16620000996171732653 state: ONLINE status: Some supported features are not enabled on the pool. action: The pool can be imported using its name or numeric identifier, though some features will not be available without an explicit 'zpool upgrade'.
config: xtank ONLINE mirror-0 ONLINE gpt/xtank0 ONLINE gpt/xtank1 ONLINE logs mirror-1 ONLINE gpt/xlog0 ONLINE gpt/xlog1 ONLINE Backtrace and stuff: ===Cut=== # less core.txt.4 witchdoctor.hq.norma.perm.ru dumped core - see /var/crash/vmcore.4 Fri Mar 21 13:00:30 YEKT 2014 FreeBSD witchdoctor.hq.norma.perm.ru 10.0-STABLE FreeBSD 10.0-STABLE #0 r263266: Mon Mar 17 23:17:32 YEKT 2014 emz@witchdoctor.hq.norma.perm.ru:/usr/obj/usr/src/sys/GENERIC amd64 panic: page fault GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "amd64-marcel-freebsd"... Unread portion of the kernel message buffer: Fatal trap 12: page fault while in kernel mode Fatal trap 12: page fault while in kernel mode cpuid = 3; apic id = 03 fault virtual address = 0x50 cpuid = 1; apic id = 01 fault code = supervisor read data, page not present fault virtual address = 0xa0 instruction pointer = 0x20:0xffffffff8188e076 fault code = supervisor read data, page not present stack pointer = 0x28:0xfffffe01208e5b00 instruction pointer = 0x20:0xffffffff818ab666 frame pointer = 0x28:0xfffffe01208e5b10 stack pointer = 0x28:0xfffffe0120fc85b0 code segment = base 0x0, limit 0xfffff, type 0x1b frame pointer = 0x28:0xfffffe0120fc8640 = DPL 0, pres 1, long 1, def32 0, gran 1 code segment = base 0x0, limit 0xfffff, type 0x1b = DPL 0, pres 1, long 1, def32 0, gran 1 processor eflags = interrupt enabled, processor eflags = resume, IOPL = 0 interrupt enabled, resume, IOPL = 0 current process = 0 (system_taskq_2) trap number = 12 current process = 1363 (zpool) panic: page fault cpuid = 3 KDB: stack backtrace: #0 0xffffffff808f01d0 at kdb_backtrace+0x60 #1 0xffffffff808b7ba5 at panic+0x155 #2 0xffffffff80c98f32 at trap_fatal+0x3a2 #3 0xffffffff80c99209 at trap_pfault+0x2c9 #4 0xffffffff80c9899b at trap+0x5bb #5 0xffffffff80c7fc52 at calltrap+0x8 #6 0xffffffff81894740 at vdev_mirror_child_select+0x70 #7 0xffffffff81894284 at vdev_mirror_io_start+0x234 #8 0xffffffff818ae754 at zio_vdev_io_start+0x184 #9 0xffffffff818aba8a at zio_execute+0x15a #10 0xffffffff8183a158 at arc_read+0x958 #11 0xffffffff81852fde at traverse_prefetcher+0x13e #12 0xffffffff8185243d at traverse_visitbp+0x20d #13 0xffffffff81852e6f at traverse_dnode+0xef #14 0xffffffff81852bb7 at traverse_visitbp+0x987 #15 0xffffffff81852613 at traverse_visitbp+0x3e3 #16 0xffffffff81852613 at traverse_visitbp+0x3e3 #17 0xffffffff81852613 at traverse_visitbp+0x3e3 Uptime: 6m45s Dumping 326 out of 4043 MB:..5%..15%..25%..35%..45%..54%..64%..74%..84%..94% Reading symbols from /boot/kernel/zfs.ko.symbols...done. Loaded symbols for /boot/kernel/zfs.ko.symbols Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. Loaded symbols for /boot/kernel/opensolaris.ko.symbols Reading symbols from /boot/kernel/geom_mirror.ko.symbols...done. Loaded symbols for /boot/kernel/geom_mirror.ko.symbols Reading symbols from /boot/kernel/ums.ko.symbols...done. Loaded symbols for /boot/kernel/ums.ko.symbols Reading symbols from /boot/kernel/uhid.ko.symbols...done. Loaded symbols for /boot/kernel/uhid.ko.symbols #0 doadump (textdump=) at pcpu.h:219 219 pcpu.h: No such file or directory. 
in pcpu.h (kgdb) #0 doadump (textdump=) at pcpu.h:219 #1 0xffffffff808b7820 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:452 #2 0xffffffff808b7be4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:759 #3 0xffffffff80c98f32 in trap_fatal (frame=, eva=) at /usr/src/sys/amd64/amd64/trap.c:875 #4 0xffffffff80c99209 in trap_pfault (frame=0xfffffe01208e5a50, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:692 #5 0xffffffff80c9899b in trap (frame=0xfffffe01208e5a50) at /usr/src/sys/amd64/amd64/trap.c:456 #6 0xffffffff80c7fc52 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:232 #7 0xffffffff8188e076 in vdev_readable (vd=0x0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:2632 #8 0xffffffff81894740 in vdev_mirror_child_select (zio=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:238 #9 0xffffffff81894284 in vdev_mirror_io_start (zio=0xfffff8004f228398) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:295 #10 0xffffffff818ae754 in zio_vdev_io_start (zio=0xfffff8004f228398) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2514 #11 0xffffffff818aba8a in zio_execute (zio=0xfffff8004f228398) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1346 #12 0xffffffff8183a158 in arc_read (pio=0x0, spa=0xfffff800799b8000, bp=, done=0x2, private=0x0, priority=ZIO_PRIORITY_ASYNC_READ, zio_flags=512, arc_flags=, zb=0x0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3417 #13 0xffffffff81852fde in traverse_prefetcher (spa=0xfffff800799b8000, zilog=0xf01ff, bp=, zb=, dnp=0xfffff8004f1d3240, arg=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:451 #14 0xffffffff8185243d in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffffe00276fe800, bp=0xfffffe00276fe980, zb=0xfffffe01208e5f08) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:250 #15 0xffffffff81852e6f in traverse_dnode (td=0xfffffe01208e6980, dnp=0xfffffe00276fe800, objset=41, object=57388) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:417 #16 0xffffffff81852bb7 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffffe00276fd000, bp=0xfffffe00276f1080, zb=0xfffffe01208e6128) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:309 #17 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, bp=0xfffffe0027709700, zb=0xfffffe01208e6258) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #18 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, bp=0xfffffe00275dd000, zb=0xfffffe01208e6388) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #19 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, bp=0xfffffe0027651000, zb=0xfffffe01208e64b8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #20 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, bp=0xfffffe0027584000, zb=0xfffffe01208e65e8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #21 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, 
bp=0xfffffe002755c000, zb=0xfffffe01208e6718) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #22 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, bp=0xfffff80079bb9840, zb=0xfffffe01208e67d8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #23 0xffffffff81852e04 in traverse_dnode (td=0xfffffe01208e6980, dnp=0xfffff80079bb9800, objset=41, object=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:407 #24 0xffffffff818528c0 in traverse_visitbp (td=0xfffffe01208e6980, dnp=0x0, bp=0xfffff800794c6280, zb=0xfffffe01208e6960) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:338 #25 0xffffffff818521d6 in traverse_prefetch_thread (arg=0xfffffe0120fc9420) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:470 #26 0xffffffff81828c00 in taskq_run (arg=0xfffff80018b01ea0, pending=983551) at /usr/src/sys/modules/zfs/../../cddl/compat/opensolaris/kern/opensolaris_taskq.c:109 #27 0xffffffff808fe1b6 in taskqueue_run_locked (queue=0xfffff8001614c300) at /usr/src/sys/kern/subr_taskqueue.c:342 #28 0xffffffff808fec28 in taskqueue_thread_loop (arg=) at /usr/src/sys/kern/subr_taskqueue.c:563 #29 0xffffffff80889cba in fork_exit ( callout=0xffffffff808feb80 , arg=0xfffff8001616f720, frame=0xfffffe01208e6ac0) at /usr/src/sys/kern/kern_fork.c:995 #30 0xffffffff80c8018e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606 #31 0x0000000000000000 in ?? () Current language: auto; currently minimal (kgdb) ===Cut=== Thanks. Eugene. From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 10:08:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 004CF208 for ; Fri, 21 Mar 2014 10:08:56 +0000 (UTC) Received: from elf.hq.norma.perm.ru (mail.norma.perm.ru [IPv6:2001:470:1f09:14c0::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4E92A2EF for ; Fri, 21 Mar 2014 10:08:56 +0000 (UTC) Received: from bsdrookie.norma.com. (bsdrookie.norma.com [192.168.7.224]) by elf.hq.norma.perm.ru (8.14.5/8.14.5) with ESMTP id s2LA8rrN097765 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 21 Mar 2014 16:08:53 +0600 (YEKT) (envelope-from emz@norma.perm.ru) Message-ID: <532C0FB5.80307@norma.perm.ru> Date: Fri, 21 Mar 2014 16:08:53 +0600 From: "Eugene M. 
Zheganin" User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: Re: crash on zpool import - help get data back References: <532BEABC.5050808@norma.perm.ru> In-Reply-To: <532BEABC.5050808@norma.perm.ru> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (elf.hq.norma.perm.ru [192.168.3.10]); Fri, 21 Mar 2014 16:08:53 +0600 (YEKT) X-Spam-Status: No hits=-101.0 bayes=0.5 testhits ALL_TRUSTED=-1, USER_IN_WHITELIST=-100 autolearn=unavailable version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on elf.hq.norma.perm.ru X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 10:08:57 -0000 Hi. On 21.03.2014 13:31, Eugene M. Zheganin wrote: > Hi. > > After some time using zfs on a 10.x server (and a couple of panics) I'm > now getting the reproducible and repeatable panic on any operations > involving particular pool. I managed to boot from USB stick, import only > healthy pools and substitite the server's zpool.cache file with one > referencing only healthy pools. Now I'm able to boot, but when I try to > import the remaining pool I'm getting the panic (attached below). Now > questions: > > - do I understand correctly that "#7 0xffffffff8188e076 in > vdev_readable (vd=0x0)" means vd is NULL, and this is triggering the panic ? > - I saw a similar (but not identical) panic in > http://lists.freebsd.org/pipermail/freebsd-fs/2012-January/013513.html , > - are there any possible tricks that could help me getting my data ? After some thinking (speeded up with the superiors running in circles) I realized that the root cause is the same and I can apply the tricks mentioned above. However, the question remains - how could this happen, because the main difference between me and original thread author is that I don't have memory errors. Furthermore, may be this technique can be applied to the FreeBSD zfs code, for example switching affected vdevs into the DEGRADED state, like solaris fmadm does when it's getting errors on a disk, thus signalling that the pool actually isn't healthy anymore ? However, I am not a programmer of any kind, so this is just a thought based on a fact that two individuals independently stepped on same issue. Thanks. Eugene. From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 10:09:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C7869277 for ; Fri, 21 Mar 2014 10:09:10 +0000 (UTC) Received: from elf.hq.norma.perm.ru (mail.norma.perm.ru [IPv6:2001:470:1f09:14c0::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id DEFA22F2 for ; Fri, 21 Mar 2014 10:09:09 +0000 (UTC) Received: from bsdrookie.norma.com. (bsdrookie.norma.com [192.168.7.224]) by elf.hq.norma.perm.ru (8.14.5/8.14.5) with ESMTP id s2LA97tb097789 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 21 Mar 2014 16:09:07 +0600 (YEKT) (envelope-from emz@norma.perm.ru) Message-ID: <532C0FC3.5060108@norma.perm.ru> Date: Fri, 21 Mar 2014 16:09:07 +0600 From: "Eugene M. 
Zheganin" User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: Re: crash on zpool import - help get data back References: <532BEABC.5050808@norma.perm.ru> <532BF531.1050400@norma.perm.ru> In-Reply-To: <532BF531.1050400@norma.perm.ru> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (elf.hq.norma.perm.ru [192.168.3.10]); Fri, 21 Mar 2014 16:09:07 +0600 (YEKT) X-Spam-Status: No hits=-101.0 bayes=0.5 testhits ALL_TRUSTED=-1, USER_IN_WHITELIST=-100 autolearn=unavailable version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on elf.hq.norma.perm.ru X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 10:09:10 -0000 Hi. On 21.03.2014 14:15, Eugene M. Zheganin wrote: > > After some thinking (speeded up with the superiors running in circles) I > realized that the root cause is the same and I can apply the tricks > mentioned above. I did them, I'm able to import the pool and read some data, but when trying to read all of it I get panic on 43th gigabyte out of 1200 (attached below). Is there some way to get this hack-patched too ? Just to save data. ===Cut=== Fatal trap 12: page fault while in kernel mode cpuid = 3; apic id = 03 fault virtual address = 0x88 fault code = supervisor read data, page not present instruction pointer = 0x20:0xffffffff818a0454 stack pointer = 0x28:0xfffffe01214086e0 frame pointer = 0x28:0xfffffe0121408740 code segment = base 0x0, limit 0xfffff, type 0x1b = DPL 0, pres 1, long 1, def32 0, gran 1 processor eflags = interrupt enabled, resume, IOPL = 0 current process = 1282 (zfs) trap number = 12 panic: page fault cpuid = 3 KDB: stack backtrace: #0 0xffffffff808f01d0 at kdb_backtrace+0x60 #1 0xffffffff808b7ba5 at panic+0x155 #2 0xffffffff80c98f32 at trap_fatal+0x3a2 #3 0xffffffff80c99209 at trap_pfault+0x2c9 #4 0xffffffff80c9899b at trap+0x5bb #5 0xffffffff80c7fc52 at calltrap+0x8 #6 0xffffffff818aeed5 at zio_checksum_verify+0x65 #7 0xffffffff818abada at zio_execute+0x15a #8 0xffffffff818ab0b3 at zio_wait+0x23 #9 0xffffffff81839f83 at arc_read+0x783 #10 0xffffffff8184e04f at backup_cb+0x35f #11 0xffffffff8185243d at traverse_visitbp+0x20d #12 0xffffffff81852e6f at traverse_dnode+0xef #13 0xffffffff81852bb7 at traverse_visitbp+0x987 #14 0xffffffff81852613 at traverse_visitbp+0x3e3 #15 0xffffffff81852613 at traverse_visitbp+0x3e3 #16 0xffffffff81852613 at traverse_visitbp+0x3e3 #17 0xffffffff81852613 at traverse_visitbp+0x3e3 Uptime: 10m14s Dumping 426 out of 4043 MB:..4%..12%..23%..34%..42%..53%..64%..72%..83%..94% Reading symbols from /boot/kernel/zfs.ko.symbols...done. Loaded symbols for /boot/kernel/zfs.ko.symbols Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. Loaded symbols for /boot/kernel/opensolaris.ko.symbols Reading symbols from /boot/kernel/geom_mirror.ko.symbols...done. Loaded symbols for /boot/kernel/geom_mirror.ko.symbols #0 doadump (textdump=) at pcpu.h:219 219 pcpu.h: No such file or directory. 
in pcpu.h (kgdb) #0 doadump (textdump=) at pcpu.h:219 #1 0xffffffff808b7820 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:452 #2 0xffffffff808b7be4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:759 #3 0xffffffff80c98f32 in trap_fatal (frame=, eva=) at /usr/src/sys/amd64/amd64/trap.c:875 #4 0xffffffff80c99209 in trap_pfault (frame=0xfffffe0121408630, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:692 #5 0xffffffff80c9899b in trap (frame=0xfffffe0121408630) at /usr/src/sys/amd64/amd64/trap.c:456 #6 0xffffffff80c7fc52 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:232 #7 0xffffffff818a0454 in zfs_ereport_start_checksum (spa=0xfffff80099c0c000, vd=0x0, zio=0xfffff80117eed000, offset=0, length=512, arg=, info=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fm.c:704 #8 0xffffffff818aeed5 in zio_checksum_verify (zio=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2858 #9 0xffffffff818abada in zio_execute (zio=0xfffff80117eed000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1346 #10 0xffffffff818ab0b3 in zio_wait (zio=0xfffff80117eed000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1370 #11 0xffffffff81839f83 in arc_read (pio=0x0, spa=0xfffff80099c0c000, bp=, done=, private=0x0, priority=ZIO_PRIORITY_ASYNC_READ, zio_flags=512, arc_flags=0xffffffff81838f02, zb=0x0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3414 #12 0xffffffff8184e04f in backup_cb (spa=0xfffff80099c0c000, zilog=, bp=0xfffffe005c9de780, zb=0xfffffe0121408b08, dnp=, arg=0xfffff8006e402900) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c:422 #13 0xffffffff8185243d in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffffe005c9de600, bp=0xfffffe005c9de780, zb=0xfffffe0121408b08) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:250 #14 0xffffffff81852e6f in traverse_dnode (td=0xfffffe01214095f0, dnp=0xfffffe005c9de600, objset=236, object=104707) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:417 #15 0xffffffff81852bb7 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffffe005c9de000, bp=0xfffffe0056cf9400, zb=0xfffffe0121408d28) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:309 #16 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, bp=0xfffffe002c653c80, zb=0xfffffe0121408e58) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #17 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, bp=0xfffffe002c74a000, zb=0xfffffe0121408f88) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #18 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, bp=0xfffffe002c71a000, zb=0xfffffe01214090b8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #19 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, bp=0xfffffe002c37d000, zb=0xfffffe01214091e8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #20 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, bp=0xfffffe002c726000, zb=0xfffffe0121409318) at 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #21 0xffffffff81852613 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, bp=0xfffff800a68a5040, zb=0xfffffe01214093d8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284 #22 0xffffffff81852e04 in traverse_dnode (td=0xfffffe01214095f0, dnp=0xfffff800a68a5000, objset=236, object=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:407 #23 0xffffffff818528c0 in traverse_visitbp (td=0xfffffe01214095f0, dnp=0x0, bp=0xfffff800a6b4ce80, zb=0xfffffe0121409588) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:338 #24 0xffffffff81851e5c in traverse_impl (spa=, ds=, objset=, rootbp=0xfffff800a6b4ce80, txg_start=, resume=, flags=, func=0xffffffff8184dcf0 , arg=0xfffff8006e402900) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:540 #25 0xffffffff81851bf3 in traverse_dataset (ds=0xfffffe01214087b0, txg_start=0, flags=, func=0x50, arg=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:563 #26 0xffffffff8184b701 in dmu_send_impl (tag=0xffffffff8192d305, dp=0xfffff80016151400, ds=0xfffff800a64d6c00, fromds=, outfd=-1504875032, fp=0xfffff800a64d6e08) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c:551 #27 0xffffffff8184b3a7 in dmu_send_obj (pool=, tosnap=, fromsnap=, outfd=1, fp=0xfffff80086e6e2d0, off=0xfffffe01214097c0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c:620 #28 0xffffffff818c2efa in zfs_ioc_send (zc=0xfffffe002c7a6000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:4273 #29 0xffffffff818bf008 in zfsdev_ioctl (dev=, zcmd=, arg=, flag=, td=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:5960 #30 0xffffffff807b2f4f in devfs_ioctl_f (fp=0xfffff80086e6e370, com=3222821404, data=0xfffff80086cd19e0, cred=, td=0xfffff80026e1f000) at /usr/src/sys/fs/devfs/devfs_vnops.c:757 #31 0xffffffff8090680e in kern_ioctl (td=0xfffff80026e1f000, fd=, com=18446741879539140528) at file.h:319 #32 0xffffffff8090658f in sys_ioctl (td=0xfffff80026e1f000, uap=0xfffffe0121409a40) at /usr/src/sys/kern/sys_generic.c:702 #33 0xffffffff80c99827 in amd64_syscall (td=0xfffff80026e1f000, traced=0) at subr_syscall.c:134 #34 0xffffffff80c7ff3b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:391 #35 0x00000008019e413a in ?? () Previous frame inner to this frame (corrupt stack?) Current language: auto; currently minimal (kgdb) ===Cut=== Thanks. Eugene. 
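(For anyone curious what the "hack-patch" from the linked 2012 thread boils down to: a guard in the mirror child-selection loop that skips a child whose vdev pointer is NULL instead of dereferencing it. The fragment below is only a sketch of that idea -- the mm_child/mc_vd fields match vdev_mirror.c, but declarations are omitted and this is not a reviewed fix; a pool that needs it is still damaged and the data should be copied off:)

	/* inside a loop such as vdev_mirror_child_select() */
	for (c = 0; c < mm->mm_children; c++) {
		mc = &mm->mm_child[c];
		if (mc->mc_vd == NULL || !vdev_readable(mc->mc_vd)) {
			/* Treat a missing vdev like an unreadable one. */
			mc->mc_error = ENXIO;
			mc->mc_tried = 1;	/* do not retry this child */
			mc->mc_skipped = 1;
			continue;
		}
		/* ... normal readable-child handling continues ... */
	}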
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 10:43:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 96825924; Fri, 21 Mar 2014 10:43:47 +0000 (UTC) Received: from mail-we0-x22c.google.com (mail-we0-x22c.google.com [IPv6:2a00:1450:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id D2BA7881; Fri, 21 Mar 2014 10:43:46 +0000 (UTC) Received: by mail-we0-f172.google.com with SMTP id t61so1467529wes.31 for ; Fri, 21 Mar 2014 03:43:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=ieiFr+1sVMKRc4JSgjatA4Q/kfjAfJwX2YFkebMKHQM=; b=m2uqmf65ZZ+6UqSPbuQU0hjDbT2WLb+B2CxpDvwKkWz4nGEsFE/g8TgsCRTFbNgV8z xBxHMvOEb5/KmHkzwd+skWFKFyVwI5Wbqr/acpuVkIiMmDVT+8nXHt7qYpXg44nzWqN6 Adj5yWAaHl06UcacAobj+rhJeoI+PqlrBV5aBg4BxZvh3O3NvKANGLONmgcDgirTI8nX 6Fhk4iTeT6+Pqk4arOLNZnnYm+3ccMbw74ratFQ80RvcUQiFCfw1ug1c+vzmzM7QjKQ/ Dlo8l1fAUDUn+9t8j6YLH8vpxnQWRb8a33T0iNpwb6cWhT/BTImRtWHSFMnLAM8nRBqZ HPyg== X-Received: by 10.194.92.228 with SMTP id cp4mr955659wjb.81.1395398625269; Fri, 21 Mar 2014 03:43:45 -0700 (PDT) Received: from mavbook.mavhome.dp.ua ([134.249.139.101]) by mx.google.com with ESMTPSA id q15sm12223741wjw.18.2014.03.21.03.43.42 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 21 Mar 2014 03:43:44 -0700 (PDT) Sender: Alexander Motin Message-ID: <532C17DD.9030704@FreeBSD.org> Date: Fri, 21 Mar 2014 12:43:41 +0200 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0 MIME-Version: 1.0 To: Andriy Gapon , Andreas Longwitz , freebsd-fs@FreeBSD.org, freebsd-geom@FreeBSD.org Subject: Re: g_mirror_access() dropping geom topology_lock [Was: Kernel crash trying to import a ZFS pool with log device] References: <532B5A0C.1010008@incore.de> <532C085D.3020201@FreeBSD.org> In-Reply-To: <532C085D.3020201@FreeBSD.org> Content-Type: text/plain; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Mar 2014 10:43:47 -0000 On 21.03.2014 11:37, Andriy Gapon wrote: > on 20/03/2014 23:13 Andreas Longwitz said the following: > [snip] >> But if I now run "zpool export" and "zpool import" the kernel crashes: >> ... >> vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C325BL31. >> vdev_geom_attach:102[1]: Attaching to label/C325BL31. >> g_access(0xffffff0096b23a00(label/C325BL31), 1, 0, 1) >> open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] >> 0xffffff0002c0d400(label/C325BL31) >> g_access(0xffffff0002ba7300(da2), 1, 0, 2) >> open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2) >> g_disk_access(da2, 1, 0, 2) >> vdev_geom_attach:123[1]: Created geom and consumer for label/C325BL31. >> vdev_geom_read_config:248[1]: Reading config from label/C325BL31... >> vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C325BL31. >> vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C330CJHW. 
>> vdev_geom_attach:102[1]: Attaching to label/C330CJHW. >> g_access(0xffffff00969a5280(label/C330CJHW), 1, 0, 1) >> open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] >> 0xffffff0002c02100(label/C330CJHW) >> g_access(0xffffff0002b96280(da3), 1, 0, 2) >> open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3) >> g_disk_access(da3, 1, 0, 2) >> vdev_geom_attach:143[1]: Created consumer for label/C330CJHW. >> vdev_geom_read_config:248[1]: Reading config from label/C330CJHW... >> vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C330CJHW. >> vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3. >> vdev_geom_attach:102[1]: Attaching to mirror/gm0p3. >> g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1) >> open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] >> 0xffffff00969c7e00(mirror/gm0p3) >> g_part_access(mirror/gm0p3,1,0,1) >> g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1) >> open delta:[r1w0e1] old:[r8w8e16] provider:[r8w8e16] >> 0xffffff00969c7d00(mirror/gm0) >> GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1. > > The following part of the log is very informative. > Thank you for capturing it. > >> vdev_geom_attach:143[1]: Created consumer for mirror/gm0p3. >> vdev_geom_read_config:248[1]: Reading config from mirror/gm0p3... >> vdev_geom_open_by_path:569[1]: guid match for provider /dev/mirror/gm0p3. > > I read the above as thread A entering vdev_geom_open_by_path and trying to > "taste" /dev/mirror/gm0p3. > >> g_post_event_x(0xffffffff80b16830, 0xffffff0096b24180, 2, 0) >> vdev_geom_detach:163[1]: Closing access to mirror/gm0p3. >> g_access(0xffffff0096b24180(mirror/gm0p3), -1, 0, -1) >> open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] >> 0xffffff00969c7e00(mirror/gm0p3) > > > Simultaneously thread B is closing access /dev/mirror/gm0p3. > It is not clear from the quoted log when and how this thread B got access to the > device in the first place. > >> g_part_access(mirror/gm0p3,-1,0,-1) >> g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, -1) >> open delta:[r-1w0e-1] old:[r9w8e17] provider:[r9w8e17] >> 0xffffff00969c7d00(mirror/gm0) >> GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e-1. >> vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3. >> vdev_geom_attach:102[1]: Attaching to mirror/gm0p3. >> vdev_geom_attach:128[1]: Found consumer for mirror/gm0p3. >> g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1) >> open delta:[r1w0e1] old:[r1w0e1] provider:[r1w0e1] >> 0xffffff00969c7e00(mirror/gm0p3) > > Thread A is accessing the device. > >> g_part_access(mirror/gm0p3,1,0,1) >> g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1) >> Open delta:[r1w0e1] old:[r9w8e17] provider:[r9w8e17] >> 0xffffff00969c7d00(mirror/gm0) >> GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1. >> vdev_geom_detach:167[1]: Destroyed consumer to mirror/gm0p3. >> g_detach(0xffffff0096b24180) >> g_destroy_consumer(0xffffff0096b24180) > > Thread B is destroying a special ZFS "taster" consumer. > >> vdev_geom_attach:147[1]: Used existing consumer for mirror/gm0p3. > > Thread A is trying to re-use the taster consumer, which has just been destroyed. > >> vdev_geom_read_config:248[1]: >> >> Fatal trap 12: page fault while in kernel mode > > Boom! > > I see two issues here. > First, the ZFS tasting code could be made more robust. If it never tried to > re-use the consumer and always created a new one, then most likely this crash > could be avoided. But there is no bug in the code. 
The code is correct and it
> uses the GEOM topology lock to avoid any concurrency issues.
>
> But GEOM mirror code breaks a contract on which the ZFS code relies.
> g_access() must be called with the topology lock held.
> I extend this requirement to a requirement that the access method of any GEOM
> provider must operate under the topology lock and must never drop it.
> In other words, if a caller must acquire g_topology_lock before calling
> g_access, then in return it must have a guarantee that the GEOM topology stays
> unchanged across the call to g_access().
> g_mirror_access() breaks the above contract.
>
> So, the code in vdev_geom_attach() obtains g_topology_lock, then it finds an
> existing valid consumer and calls g_access() on it. It reasonably expects that
> the consumer remains valid, but because g_mirror_access() drops and reacquires the
> topology lock, there is a chance that the topology can change and the consumer
> may become invalid.
>
> I am not very familiar with gmirror code, so I am not sure how to fix the
> problem from that end.

I can confirm this. I have known about this problem for some time already. The
same issue as shown in GMIRROR is also present in GRAID. AFAIR the problem is
in keeping the lock order between the GEOM topology lock and the class' own
lock.

The only "excuse" is that it is not very reasonable to have ZFS on top of
GMIRROR or GRAID.

-- 
Alexander Motin

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 10:52:16 2014
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A9DADA5D; Fri, 21 Mar 2014 10:52:16 +0000 (UTC)
Received: from dss.incore.de (dss.incore.de [195.145.1.138]) by mx1.freebsd.org (Postfix) with ESMTP id D9FBD94B; Fri, 21 Mar 2014 10:52:15 +0000 (UTC)
Received: from inetmail.dmz (inetmail_new [10.3.0.4]) by dss.incore.de (Postfix) with ESMTP id A600C5C023; Fri, 21 Mar 2014 11:52:13 +0100 (CET)
X-Virus-Scanned: amavisd-new at incore.de
Received: from dss.incore.de ([10.3.0.3]) by inetmail.dmz (inetmail.dmz [10.3.0.4]) (amavisd-new, port 10024) with LMTP id dgsS3JBdttY4; Fri, 21 Mar 2014 11:52:08 +0100 (CET)
Received: from mail.incore (fwintern.dmz [10.0.0.253]) by dss.incore.de (Postfix) with ESMTP id 4E1E95C029; Fri, 21 Mar 2014 11:52:08 +0100 (CET)
Received: from bsdlo.incore (bsdlo.incore [192.168.0.84]) by mail.incore (Postfix) with ESMTP id 3254350868; Fri, 21 Mar 2014 11:52:08 +0100 (CET)
Message-ID: <532C19D7.9000901@incore.de>
Date: Fri, 21 Mar 2014 11:52:07 +0100
From: Andreas Longwitz 
User-Agent: Thunderbird 2.0.0.19 (X11/20090113)
MIME-Version: 1.0
To: Andriy Gapon , freebsd-fs@freebsd.org
Subject: Re: g_mirror_access() dropping geom topology_lock [Was: Kernel crash trying to import a ZFS pool with log device]
References: <532B5A0C.1010008@incore.de> <532C085D.3020201@FreeBSD.org>
In-Reply-To: <532C085D.3020201@FreeBSD.org>
Content-Type: text/plain; charset=ISO-8859-15
Content-Transfer-Encoding: 7bit
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 21 Mar 2014 10:52:16 -0000

Thanks for the quick analysis,

> I read the above as thread A entering vdev_geom_open_by_path and trying to
> "taste" /dev/mirror/gm0p3.
> >> g_post_event_x(0xffffffff80b16830, 0xffffff0096b24180, 2, 0) >> vdev_geom_detach:163[1]: Closing access to mirror/gm0p3. >> g_access(0xffffff0096b24180(mirror/gm0p3), -1, 0, -1) >> open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] >> 0xffffff00969c7e00(mirror/gm0p3) > > > Simultaneously thread B is closing access /dev/mirror/gm0p3. > It is not clear from the quoted log when and how this thread B got access to the > device in the first place. OK, my snip of the console output was not correct. The complete output after the command "zpool import" follows. _post_event_x(0xffffffff8039b8f0, 0xffffff0002c07680, 2, 262144) g_post_event_x(0xffffffff8039b8f0, 0xffffff0096615280, 2, 262144) g_dev_open(acd0, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002aa5300(acd0), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002ad6a00(acd0) g_dev_close(acd0, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002aa5300(acd0), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002ad6a00(acd0) g_dev_open(da0, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a0cc80(da0), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff00029f1d00(da0) g_disk_access(da0, 1, 0, 0) g_dev_close(da0, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a0cc80(da0), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff00029f1d00(da0) g_disk_access(da0, -1, 0, 0) g_dev_open(da1, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69180(da1), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002a23500(da1) g_disk_access(da1, 1, 0, 0) g_dev_close(da1, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69180(da1), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002a23500(da1) g_disk_access(da1, -1, 0, 0) g_dev_open(da2, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69280(da2), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2) g_disk_access(da2, 1, 0, 0) g_dev_close(da2, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69280(da2), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002a23800(da2) g_disk_access(da2, -1, 0, 0) g_dev_open(da3, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69480(da3), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3) g_disk_access(da3, 1, 0, 0) g_dev_close(da3, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69480(da3), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002a23b00(da3) g_disk_access(da3, -1, 0, 0) g_dev_open(da4, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69600(da4), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff00023c5000(da4) g_disk_access(da4, 1, 0, 0) g_dev_close(da4, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a69600(da4), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff00023c5000(da4) g_disk_access(da4, -1, 0, 0) g_dev_open(da5, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a16280(da5), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002a5fb00(da5) g_disk_access(da5, 1, 0, 0) g_dev_close(da5, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a16280(da5), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002a5fb00(da5) g_disk_access(da5, -1, 0, 0) g_dev_open(label/C325BL31, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7e00(label/C325BL31), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c0d400(label/C325BL31) 
g_access(0xffffff0002ba7300(da2), 1, 0, 1) open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2) g_disk_access(da2, 1, 0, 1) g_dev_close(label/C325BL31, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7e00(label/C325BL31), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002c0d400(label/C325BL31) g_access(0xffffff0002ba7300(da2), -1, 0, -1) open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] 0xffffff0002a23800(da2) g_disk_access(da2, -1, 0, -1) g_dev_open(label/C330CJHW, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a16700(label/C330CJHW), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c02100(label/C330CJHW) g_access(0xffffff0002b96280(da3), 1, 0, 1) open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3) g_disk_access(da3, 1, 0, 1) g_dev_close(label/C330CJHW, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a16700(label/C330CJHW), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002c02100(label/C330CJHW) g_access(0xffffff0002b96280(da3), -1, 0, -1) open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] 0xffffff0002a23b00(da3) g_disk_access(da3, -1, 0, -1) g_dev_open(md0, 1, 8192, 0xffffff0096017000) g_access(0xffffff00961f1900(md0), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff00961b2800(md0) g_dev_close(md0, 1, 8192, 0xffffff0096017000) g_access(0xffffff00961f1900(md0), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff00961b2800(md0) g_dev_open(mirror/gm0, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096a9ce00(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096a9ce00(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p1, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b97880(mirror/gm0p1), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0096649700(mirror/gm0p1) g_part_access(mirror/gm0p1,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p1, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b97880(mirror/gm0p1), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0096649700(mirror/gm0p1) g_part_access(mirror/gm0p1,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p10, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096974880(mirror/gm0p10), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff009698a800(mirror/gm0p10) g_part_access(mirror/gm0p10,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. 
g_dev_close(mirror/gm0p10, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096974880(mirror/gm0p10), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff009698a800(mirror/gm0p10) g_part_access(mirror/gm0p10,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p10.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff00966df400(mirror/gm0p10.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c70500(mirror/gm0p10.journal) g_dev_close(mirror/gm0p10.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff00966df400(mirror/gm0p10.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002c70500(mirror/gm0p10.journal) g_dev_open(mirror/gm0p11, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002c26c80(mirror/gm0p11), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0096989800(mirror/gm0p11) g_part_access(mirror/gm0p11,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p11, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002c26c80(mirror/gm0p11), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0096989800(mirror/gm0p11) g_part_access(mirror/gm0p11,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p11.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096565a80(mirror/gm0p11.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0096988700(mirror/gm0p11.journal) g_dev_close(mirror/gm0p11.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096565a80(mirror/gm0p11.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0096988700(mirror/gm0p11.journal) g_dev_open(mirror/gm0p2, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096b26180(mirror/gm0p2), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff009698a100(mirror/gm0p2) g_part_access(mirror/gm0p2,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p2, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096b26180(mirror/gm0p2), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff009698a100(mirror/gm0p2) g_part_access(mirror/gm0p2,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p3, 1, 8192, 0xffffff0096017000) g_access(0xffffff009607bb00(mirror/gm0p3), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff00969c7e00(mirror/gm0p3) g_part_access(mirror/gm0p3,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. 
g_dev_close(mirror/gm0p3, 1, 8192, 0xffffff0096017000) g_access(0xffffff009607bb00(mirror/gm0p3), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff00969c7e00(mirror/gm0p3) g_part_access(mirror/gm0p3,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p4, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096562980(mirror/gm0p4), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c03200(mirror/gm0p4) g_part_access(mirror/gm0p4,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p4, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096562980(mirror/gm0p4), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c03200(mirror/gm0p4) g_part_access(mirror/gm0p4,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p5, 1, 8192, 0xffffff0096017000) g_access(0xffffff00966cfa80(mirror/gm0p5), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0096a97500(mirror/gm0p5) g_part_access(mirror/gm0p5,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p5, 1, 8192, 0xffffff0096017000) g_access(0xffffff00966cfa80(mirror/gm0p5), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0096a97500(mirror/gm0p5) g_part_access(mirror/gm0p5,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p6, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096a9d500(mirror/gm0p6), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002361600(mirror/gm0p6) g_part_access(mirror/gm0p6,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p6, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096a9d500(mirror/gm0p6), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002361600(mirror/gm0p6) g_part_access(mirror/gm0p6,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p7, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096562400(mirror/gm0p7), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0096989a00(mirror/gm0p7) g_part_access(mirror/gm0p7,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. 
g_dev_close(mirror/gm0p7, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096562400(mirror/gm0p7), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0096989a00(mirror/gm0p7) g_part_access(mirror/gm0p7,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p8, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096c12300(mirror/gm0p8), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff009662ba00(mirror/gm0p8) g_part_access(mirror/gm0p8,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p8, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096c12300(mirror/gm0p8), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff009662ba00(mirror/gm0p8) g_part_access(mirror/gm0p8,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p8.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff00966cf300(mirror/gm0p8.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0096172d00(mirror/gm0p8.journal) g_dev_close(mirror/gm0p8.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff00966cf300(mirror/gm0p8.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0096172d00(mirror/gm0p8.journal) g_dev_open(mirror/gm0p9, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096564e00(mirror/gm0p9), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff00969ad500(mirror/gm0p9) g_part_access(mirror/gm0p9,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p9, 1, 8192, 0xffffff0096017000) g_access(0xffffff0096564e00(mirror/gm0p9), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff00969ad500(mirror/gm0p9) g_part_access(mirror/gm0p9,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. g_dev_open(mirror/gm0p9.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff00961f1980(mirror/gm0p9.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff009664a600(mirror/gm0p9.journal) g_dev_close(mirror/gm0p9.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff00961f1980(mirror/gm0p9.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff009664a600(mirror/gm0p9.journal) g_dev_open(mirror/gmsv09, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95c80(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95c80(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. 
g_dev_open(mirror/gmsv09p1, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95880(mirror/gmsv09p1), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002ad6700(mirror/gmsv09p1) g_part_access(mirror/gmsv09p1,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p1, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95880(mirror/gmsv09p1), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002ad6700(mirror/gmsv09p1) g_part_access(mirror/gmsv09p1,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p10, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7100(mirror/gmsv09p10), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c32100(mirror/gmsv09p10) g_part_access(mirror/gmsv09p10,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p10, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7100(mirror/gmsv09p10), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c32100(mirror/gmsv09p10) g_part_access(mirror/gmsv09p10,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p10.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b94a80(mirror/gmsv09p10.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30200(mirror/gmsv09p10.journal) g_dev_close(mirror/gmsv09p10.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b94a80(mirror/gmsv09p10.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30200(mirror/gmsv09p10.journal) g_dev_open(mirror/gmsv09p2, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95580(mirror/gmsv09p2), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e0] 0xffffff0002c31000(mirror/gmsv09p2) g_part_access(mirror/gmsv09p2,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p2, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95580(mirror/gmsv09p2), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e0] 0xffffff0002c31000(mirror/gmsv09p2) g_part_access(mirror/gmsv09p2,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p3, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95280(mirror/gmsv09p3), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30d00(mirror/gmsv09p3) g_part_access(mirror/gmsv09p3,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. 
g_dev_close(mirror/gmsv09p3, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95280(mirror/gmsv09p3), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30d00(mirror/gmsv09p3) g_part_access(mirror/gmsv09p3,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p4, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba4400(mirror/gmsv09p4), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30b00(mirror/gmsv09p4) g_part_access(mirror/gmsv09p4,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p4, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba4400(mirror/gmsv09p4), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30b00(mirror/gmsv09p4) g_part_access(mirror/gmsv09p4,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p5, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba4780(mirror/gmsv09p5), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30900(mirror/gmsv09p5) g_part_access(mirror/gmsv09p5,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p5, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba4780(mirror/gmsv09p5), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30900(mirror/gmsv09p5) g_part_access(mirror/gmsv09p5,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p6, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b94c00(mirror/gmsv09p6), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30700(mirror/gmsv09p6) g_part_access(mirror/gmsv09p6,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p6, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b94c00(mirror/gmsv09p6), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30700(mirror/gmsv09p6) g_part_access(mirror/gmsv09p6,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p7, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b94900(mirror/gmsv09p7), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30500(mirror/gmsv09p7) g_part_access(mirror/gmsv09p7,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. 
g_dev_close(mirror/gmsv09p7, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b94900(mirror/gmsv09p7), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30500(mirror/gmsv09p7) g_part_access(mirror/gmsv09p7,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p7.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95380(mirror/gmsv09p7.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c03d00(mirror/gmsv09p7.journal) g_dev_close(mirror/gmsv09p7.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95380(mirror/gmsv09p7.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c03d00(mirror/gmsv09p7.journal) g_dev_open(mirror/gmsv09p8, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ae0c80(mirror/gmsv09p8), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30300(mirror/gmsv09p8) g_part_access(mirror/gmsv09p8,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p8, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ae0c80(mirror/gmsv09p8), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30300(mirror/gmsv09p8) g_part_access(mirror/gmsv09p8,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. g_dev_open(mirror/gmsv09p8.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7480(mirror/gmsv09p8.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c03900(mirror/gmsv09p8.journal) g_dev_close(mirror/gmsv09p8.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7480(mirror/gmsv09p8.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c03900(mirror/gmsv09p8.journal) g_dev_open(mirror/gmsv09p9, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba4e00(mirror/gmsv09p9), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c30100(mirror/gmsv09p9) g_part_access(mirror/gmsv09p9,1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), 1, 0, 0) open delta:[r1w0e0] old:[r9w9e17] provider:[r9w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r1w0e0. g_dev_close(mirror/gmsv09p9, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba4e00(mirror/gmsv09p9), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c30100(mirror/gmsv09p9) g_part_access(mirror/gmsv09p9,-1,0,0) g_access(0xffffff0002b97780(mirror/gmsv09), -1, 0, 0) open delta:[r-1w0e0] old:[r10w9e17] provider:[r10w9e17] 0xffffff0002c0d800(mirror/gmsv09) GEOM_MIRROR[2]: Access request for mirror/gmsv09: r-1w0e0. 
g_dev_open(mirror/gmsv09p9.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95900(mirror/gmsv09p9.journal), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r1w1e1] 0xffffff0002c03600(mirror/gmsv09p9.journal) g_dev_close(mirror/gmsv09p9.journal, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002b95900(mirror/gmsv09p9.journal), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r2w1e1] 0xffffff0002c03600(mirror/gmsv09p9.journal) g_dev_open(label/C325BL31, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7e00(label/C325BL31), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c0d400(label/C325BL31) g_access(0xffffff0002ba7300(da2), 1, 0, 1) open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2) g_disk_access(da2, 1, 0, 1) g_dev_close(label/C325BL31, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002ba7e00(label/C325BL31), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002c0d400(label/C325BL31) g_access(0xffffff0002ba7300(da2), -1, 0, -1) open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] 0xffffff0002a23800(da2) g_disk_access(da2, -1, 0, -1) g_dev_open(label/C330CJHW, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a16700(label/C330CJHW), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c02100(label/C330CJHW) g_access(0xffffff0002b96280(da3), 1, 0, 1) open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3) g_disk_access(da3, 1, 0, 1) g_dev_close(label/C330CJHW, 1, 8192, 0xffffff0096017000) g_access(0xffffff0002a16700(label/C330CJHW), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff0002c02100(label/C330CJHW) g_access(0xffffff0002b96280(da3), -1, 0, -1) open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] 0xffffff0002a23b00(da3) g_disk_access(da3, -1, 0, -1) g_dev_open(mirror/gm0p3, 1, 8192, 0xffffff0096017000) g_access(0xffffff009607bb00(mirror/gm0p3), 1, 0, 0) open delta:[r1w0e0] old:[r0w0e0] provider:[r0w0e0] 0xffffff00969c7e00(mirror/gm0p3) g_part_access(mirror/gm0p3,1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 0) open delta:[r1w0e0] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e0. g_dev_close(mirror/gm0p3, 1, 8192, 0xffffff0096017000) g_access(0xffffff009607bb00(mirror/gm0p3), -1, 0, 0) open delta:[r-1w0e0] old:[r1w0e0] provider:[r1w0e0] 0xffffff00969c7e00(mirror/gm0p3) g_part_access(mirror/gm0p3,-1,0,0) g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, 0) open delta:[r-1w0e0] old:[r9w8e16] provider:[r9w8e16] 0xffffff00969c7d00(mirror/gm0) GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e0. vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C325BL31. vdev_geom_attach:102[1]: Attaching to label/C325BL31. g_access(0xffffff0096b23a00(label/C325BL31), 1, 0, 1) open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c0d400(label/C325BL31) g_access(0xffffff0002ba7300(da2), 1, 0, 2) open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23800(da2) g_disk_access(da2, 1, 0, 2) vdev_geom_attach:123[1]: Created geom and consumer for label/C325BL31. vdev_geom_read_config:248[1]: Reading config from label/C325BL31... vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C325BL31. vdev_geom_open_by_path:554[1]: Found provider by name /dev/label/C330CJHW. vdev_geom_attach:102[1]: Attaching to label/C330CJHW. 
g_access(0xffffff00969a5280(label/C330CJHW), 1, 0, 1)
open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002c02100(label/C330CJHW)
g_access(0xffffff0002b96280(da3), 1, 0, 2)
open delta:[r1w0e2] old:[r0w0e0] provider:[r0w0e0] 0xffffff0002a23b00(da3)
g_disk_access(da3, 1, 0, 2)
vdev_geom_attach:143[1]: Created consumer for label/C330CJHW.
vdev_geom_read_config:248[1]: Reading config from label/C330CJHW...
vdev_geom_open_by_path:569[1]: guid match for provider /dev/label/C330CJHW.
vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3.
vdev_geom_attach:102[1]: Attaching to mirror/gm0p3.
g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1)
open delta:[r1w0e1] old:[r0w0e0] provider:[r0w0e0] 0xffffff00969c7e00(mirror/gm0p3)
g_part_access(mirror/gm0p3,1,0,1)
g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1)
open delta:[r1w0e1] old:[r8w8e16] provider:[r8w8e16] 0xffffff00969c7d00(mirror/gm0)
GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1.
vdev_geom_attach:143[1]: Created consumer for mirror/gm0p3.
vdev_geom_read_config:248[1]: Reading config from mirror/gm0p3...
vdev_geom_open_by_path:569[1]: guid match for provider /dev/mirror/gm0p3.
g_post_event_x(0xffffffff80b16830, 0xffffff0096b24180, 2, 0)
vdev_geom_detach:163[1]: Closing access to mirror/gm0p3.
g_access(0xffffff0096b24180(mirror/gm0p3), -1, 0, -1)
open delta:[r-1w0e-1] old:[r1w0e1] provider:[r1w0e1] 0xffffff00969c7e00(mirror/gm0p3)
g_part_access(mirror/gm0p3,-1,0,-1)
g_access(0xffffff0096c0f800(mirror/gm0), -1, 0, -1)
open delta:[r-1w0e-1] old:[r9w8e17] provider:[r9w8e17] 0xffffff00969c7d00(mirror/gm0)
GEOM_MIRROR[2]: Access request for mirror/gm0: r-1w0e-1.
vdev_geom_open_by_path:554[1]: Found provider by name /dev/mirror/gm0p3.
vdev_geom_attach:102[1]: Attaching to mirror/gm0p3.
vdev_geom_attach:128[1]: Found consumer for mirror/gm0p3.
g_access(0xffffff0096b24180(mirror/gm0p3), 1, 0, 1)
open delta:[r1w0e1] old:[r1w0e1] provider:[r1w0e1] 0xffffff00969c7e00(mirror/gm0p3)
g_part_access(mirror/gm0p3,1,0,1)
g_access(0xffffff0096c0f800(mirror/gm0), 1, 0, 1)
open delta:[r1w0e1] old:[r9w8e17] provider:[r9w8e17] 0xffffff00969c7d00(mirror/gm0)
GEOM_MIRROR[2]: Access request for mirror/gm0: r1w0e1.
vdev_geom_detach:167[1]: Destroyed consumer to mirror/gm0p3.
g_detach(0xffffff0096b24180)
g_destroy_consumer(0xffffff0096b24180)
vdev_geom_attach:147[1]: Used existing consumer for mirror/gm0p3.
vdev_geom_read_config:248[1]:

Fatal trap 12: page fault while in kernel mode

> I see two issues here.
> First, the ZFS tasting code could be made more robust. If it never tried to
> re-use the consumer and always created a new one, then most likely this crash
> could be avoided. But there is no bug in the code. The code is correct and
> it uses the GEOM topology lock to avoid any concurrency issues.
>
> But GEOM mirror code breaks a contract on which the ZFS code relies.
> g_access() must be called with the topology lock held.
> I extend this requirement to a requirement that the access method of any GEOM
> provider must operate under the topology lock and must never drop it.
> In other words, if a caller must acquire g_topology_lock before calling
> g_access, then in return it must have a guarantee that the GEOM topology stays
> unchanged across the call to g_access().
> g_mirror_access() breaks the above contract.
>
> So, the code in vdev_geom_attach() obtains g_topology_lock, then it finds an
> existing valid consumer and calls g_access() on it. It reasonably expects that
> the consumer remains valid, but because g_mirror_access() drops and reacquires the
> topology lock, there is a chance that the topology can change and the consumer
> may become invalid.

OK, I do not understand why there are two threads A and B after "zpool export"
dealing with the zfs log device. If you follow Pawel's argumentation, I am
ready to test a patch for ZFS.

-- 
Andreas Longwitz

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 13:58:06 2014
Return-Path: 
Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E271D67B; Fri, 21 Mar 2014 13:58:06 +0000 (UTC)
Received: from mail.dawidek.net (garage.dawidek.net [91.121.88.72]) by mx1.freebsd.org (Postfix) with ESMTP id 9C260CF3; Fri, 21 Mar 2014 13:58:06 +0000 (UTC)
Received: from localhost (58.wheelsystems.com [83.12.187.58]) by mail.dawidek.net (Postfix) with ESMTPSA id 7205D2A7; Fri, 21 Mar 2014 14:58:04 +0100 (CET)
Date: Fri, 21 Mar 2014 15:00:13 +0100
From: Pawel Jakub Dawidek 
To: Alexander Motin 
Subject: Re: g_mirror_access() dropping geom topology_lock [Was: Kernel crash trying to import a ZFS pool with log device]
Message-ID: <20140321140012.GB1656@garage.freebsd.pl>
References: <532B5A0C.1010008@incore.de> <532C085D.3020201@FreeBSD.org> <532C17DD.9030704@FreeBSD.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="61jdw2sOBCFtR2d/"
Content-Disposition: inline
In-Reply-To: <532C17DD.9030704@FreeBSD.org>
X-OS: FreeBSD 11.0-CURRENT amd64
User-Agent: Mutt/1.5.22 (2013-10-16)
Cc: freebsd-fs@FreeBSD.org, Andreas Longwitz , Andriy Gapon , freebsd-geom@FreeBSD.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 21 Mar 2014 13:58:07 -0000

--61jdw2sOBCFtR2d/
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, Mar 21, 2014 at 12:43:41PM +0200, Alexander Motin wrote:
> On 21.03.2014 11:37, Andriy Gapon wrote:
> > Boom!
> >
> > I see two issues here.
> > First, the ZFS tasting code could be made more robust. If it never tried to
> > re-use the consumer and always created a new one, then most likely this crash
> > could be avoided. But there is no bug in the code. The code is correct and
> > it uses the GEOM topology lock to avoid any concurrency issues.
> >
> > But GEOM mirror code breaks a contract on which the ZFS code relies.
> > g_access() must be called with the topology lock held.
> > I extend this requirement to a requirement that the access method of any GEOM
> > provider must operate under the topology lock and must never drop it.
> > In other words, if a caller must acquire g_topology_lock before calling
> > g_access, then in return it must have a guarantee that the GEOM topology stays
> > unchanged across the call to g_access().
> > g_mirror_access() breaks the above contract.
> >
> > So, the code in vdev_geom_attach() obtains g_topology_lock, then it finds an
> > existing valid consumer and calls g_access() on it. It reasonably expects that
> > the consumer remains valid, but because g_mirror_access() drops and reacquires the
> > topology lock, there is a chance that the topology can change and the consumer
> > may become invalid.
> >
> > I am not very familiar with gmirror code, so I am not sure how to fix the
> > problem from that end.
> 
> I can confirm this. I have known about this problem for some time already. The
> same issue as shown in GMIRROR is also present in GRAID. AFAIR the
> problem is in keeping the lock order between the GEOM topology lock and the
> class' own lock.
> 
> The only "excuse" is that it is not very reasonable to have ZFS on top
> of GMIRROR or GRAID.

In my opinion we should stop pretending that we can do without dropping
the topology lock in the access method, accept that fact and act
accordingly in other GEOM classes (like ZFS::VDEV).

-- 
Pawel Jakub Dawidek                       http://www.wheelsystems.com
FreeBSD committer                         http://www.FreeBSD.org
Am I Evil? Yes, I Am!                     http://mobter.com

--61jdw2sOBCFtR2d/
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (FreeBSD)

iEYEARECAAYFAlMsRewACgkQForvXbEpPzTfVQCfc7YI5qqBOJYWU+TFgk5nMvZa
oFkAoLNRBKH+RCATCfkhJlLucOTzxHzu
=BHMw
-----END PGP SIGNATURE-----

--61jdw2sOBCFtR2d/--
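The window described in this thread can be sketched in a few lines of illustrative pseudo-C. This is a simplified rendering of the vdev_geom_attach() pattern under discussion, not the actual FreeBSD source; find_existing_consumer() is a made-up stand-in for the real lookup, and the declarations are only there to make the fragment self-contained:

	struct g_consumer *cp;
	struct g_provider *pp;	/* the provider being "tasted" */
	int error;

	g_topology_assert();			/* caller holds g_topology_lock */
	cp = find_existing_consumer(pp);	/* hypothetical lookup helper */
	if (cp != NULL) {
		/*
		 * Assumed contract: the topology stays unchanged across
		 * g_access().  g_mirror_access() violates it by dropping and
		 * reacquiring the topology lock internally, so while this
		 * call is in progress another thread can g_detach() and
		 * g_destroy_consumer() this same cp.
		 */
		error = g_access(cp, 1, 0, 1);
		/* cp may already be freed here -- the "Fatal trap 12" above. */
	}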
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 21:10:38 2014
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E4542738 for ; Fri, 21 Mar 2014 21:10:38 +0000 (UTC)
Received: from rs2.shuttle.de (rs2.shuttle.de [IPv6:2001:638:206:3::8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A752624E for ; Fri, 21 Mar 2014 21:10:38 +0000 (UTC)
Received: by rs2.shuttle.de (Postfix, from userid 10) id 637EE5804C; Fri, 21 Mar 2014 22:10:28 +0100 (CET)
Received: from hal9k.schweikhardt.net (localhost [127.0.0.1]) by hal9k.schweikhardt.net (8.14.8/8.14.8) with ESMTP id s2LLAKb1080567 for ; Fri, 21 Mar 2014 22:10:20 +0100 (CET) (envelope-from schweikh@hal9k.schweikhardt.net)
Received: (from schweikh@localhost) by hal9k.schweikhardt.net (8.14.8/8.14.8/Submit) id s2LLAKUf080566 for freebsd-fs@freebsd.org; Fri, 21 Mar 2014 22:10:20 +0100 (CET) (envelope-from schweikh)
Date: Fri, 21 Mar 2014 22:10:20 +0100
From: Jens Schweikhardt 
To: freebsd-fs@freebsd.org
Subject: Quickly emptying a ZFS
Message-ID: <20140321211020.GB1859@schweikhardt.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.23 (2014-03-12)
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 21 Mar 2014 21:10:39 -0000

hello, world\n

If I need to quickly delete all files on a file system, with UFS I can
simply use newfs. Is there something equivalent for ZFS that is neither of

* cd mount-point; rm -rf * (not really quick, needs messing with chflags)
* zfs destroy dataset; zfs create dataset (loses all properties set)

I.e. the task is to quickly nuke all files without losing the dataset
properties. Is there maybe a way to save and restore the properties?

Regards,

Jens
-- 
Jens Schweikhardt http://www.schweikhardt.net/
SIGSIG -- signature too long (core dumped)

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 21 21:21:04 2014
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AE12290C for ; Fri, 21 Mar 2014 21:21:04 +0000 (UTC)
Received: from mail-vc0-f174.google.com (mail-vc0-f174.google.com [209.85.220.174]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6CC96343 for ; Fri, 21 Mar 2014 21:21:04 +0000 (UTC)
Received: by mail-vc0-f174.google.com with SMTP id ld13so3272947vcb.33 for ; Fri, 21 Mar 2014 14:20:57 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc:content-type; bh=fjju818G8WLB/TZxSiH8dgMo9xK3Q3RDR5ksO5MROd0=; b=UpjUSLmKipodaUF2iil6+AOFXLt3RWvHEs/Qetx74yrTCjKVJKe0vJKBS3H0/BeJW+ C4qaw8gFDGK66Ki+lSRMTBDZyndvXCEJ+DK0x7WlZtOgMVjOu4ihw6t9PHZ4QGdT5WOc oVHvIh4vvb3oapvfY+/kH276+qXHCKRrL8v4bmL1z2x1JAC/DXayt5aJKx1Gxnjoks26 szhb1L4ID+CfEfMShH1LLGfD2GUPpXtY6l8u9d21UdN/2Dfm2G60GtDUak5WbwXsfKEp uM3OV8Y+Z3T08hC1H8CN89BArxpvwd+vDiZ3LTKHRtUwXmXwlEXJDU2lq73hXsv8P29f +4/g==
X-Gm-Message-State: ALoCoQkFiGkYXI5Qd2Vb4VPVNxhjQ57wDdLG72mNqyiY5o7Hze1x2sZiruh0FrDG4XiNbcsCLxyP
X-Received: by 10.52.240.207 with SMTP id wc15mr33700849vdc.14.1395436857522; Fri, 21 Mar 2014 14:20:57 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.204.9 with HTTP; Fri, 21 Mar 2014 14:20:37 -0700 (PDT)
In-Reply-To: <20140321211020.GB1859@schweikhardt.net>
References: <20140321211020.GB1859@schweikhardt.net>
From: Ira Cooper 
Date: Sat, 22 Mar 2014 02:50:37 +0530
Message-ID: 
Subject: Re: Quickly emptying a ZFS
To: Jens Schweikhardt 
Content-Type: text/plain; charset=ISO-8859-1
X-Content-Filtered-By: Mailman/MimeDel 2.1.17
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 21 Mar 2014 21:21:04 -0000

Personally, I'd just create an extra subvolume if I did that...

tank/prop/real_vol

Put all the properties you want in prop and have real_vol inherit them.
(Change names to suit taste.)

Destroy/recreate real_vol...

-Ira

On Sat, Mar 22, 2014 at 2:40 AM, Jens Schweikhardt <schweikh@schweikhardt.net> wrote:

> hello, world\n
>
> If I need to quickly delete all files on a file system, with UFS I can
> simply use newfs. Is there something equivalent for ZFS that is neither
> of
>
> * cd mount-point; rm -rf * (not really quick, needs messing with chflags)
> * zfs destroy dataset; zfs create dataset (loses all properties set)
>
> I.e. the task is to quickly nuke all files without losing the dataset
> properties. Is there maybe a way to save and restore the properties?
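Spelled out as commands, the layout Ira suggests might look like the following. The pool and dataset names (tank, tank/prop, tank/prop/real_vol) are placeholders, and the two properties are only examples:

	# Keep the tunables on a parent dataset that is never destroyed:
	zfs create -o compression=lz4 -o atime=off tank/prop
	zfs create tank/prop/real_vol	# inherits everything from tank/prop

	# The ZFS counterpart of newfs is then just a destroy/create pair;
	# the recreated child picks its properties back up via inheritance:
	zfs destroy -r tank/prop/real_vol
	zfs create tank/prop/real_vol

For the save-and-restore variant Jens asks about, the locally set properties of an existing dataset can be listed in a script-friendly form with "zfs get -H -s local -o property,value all tank/prop/real_vol" and replayed with zfs set after the dataset is recreated.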
>
> Regards,
>
> Jens
> --
> Jens Schweikhardt http://www.schweikhardt.net/
> SIGSIG -- signature too long (core dumped)
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 23 00:02:18 2014
Return-Path: 
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D83F7E13 for ; Sun, 23 Mar 2014 00:02:18 +0000 (UTC)
Received: from ln.servalan.com (ln.servalan.com [IPv6:2600:3c00::f03c:91ff:fe96:62f5]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B2DAA7D2 for ; Sun, 23 Mar 2014 00:02:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=servalan.com; s=rsadkim; h=Message-Id:Date:Subject:From:To; bh=rKjjnuvBo2rGG4IC8OgWzVp/83/0jlYPvA/ncfG3GhA=; b=so7QBMwSMoUYkUG/UIJWvowofrVlFtx2m0D1lXzD3kjcF3GjxlGtyxhmK53+CU1rlgzOOl+MqJQvxFHqf/7iA1/Igrjud679echrgEPSq8xRzs8b7PZWmBlPnp73muOJGjf0xNW90iEV3KBGNlziPLtbdQ4/QhuaEsmvIeNxKgQ=;
Received: from uucp by ln.servalan.com with local-rmail (Exim 4.76) (envelope-from ) id 1WRVrl-00010b-MZ for freebsd-fs@freebsd.org; Sat, 22 Mar 2014 19:02:17 -0500
Received: from localhost ([127.0.0.1]:39758 helo=ichotolot.servalan.com) by servalan.servalan.com with esmtp (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WRVly-00080s-ER for freebsd-fs@freebsd.org; Sat, 22 Mar 2014 18:56:18 -0500
To: freebsd-fs@freebsd.org
From: rmtodd@servalan.servalan.com (Richard Todd)
Subject: ZFS panic: solaris assert: wakeup >= now, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c, line: 1066
Date: Sat, 22 Mar 2014 18:56:18 -0500
Message-Id: 
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Sun, 23 Mar 2014 00:02:19 -0000

I just recently updated my main machine to a recent (Wed. morning) 9-STABLE version.
While doing backups to an external ZFS-formatted USB drive I got the following panic:

panic: solaris assert: wakeup >= now, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c, line: 1066

The culprit seems to be the write-throttling code in dmu_tx.c (I've noted the actual line of the panic with an arrow):

#ifdef _KERNEL
#ifdef illumos
	mutex_enter(&curthread->t_delay_lock);
	while (cv_timedwait_hires(&curthread->t_delay_cv,
	    &curthread->t_delay_lock, wakeup, zfs_delay_resolution_ns,
	    CALLOUT_FLAG_ABSOLUTE | CALLOUT_FLAG_ROUNDUP) > 0)
		continue;
	mutex_exit(&curthread->t_delay_lock);
#else
	/* XXX High resolution callouts are not available */
	ASSERT(wakeup >= now);                        <------ panic
	pause("dmu_tx_delay", NSEC_TO_TICK(wakeup - now));
#endif
#else
	hrtime_t delta = wakeup - gethrtime();
	struct timespec ts;
	ts.tv_sec = delta / NANOSEC;
	ts.tv_nsec = delta % NANOSEC;
	(void) nanosleep(&ts, NULL);
#endif

The code is supposed to put the thread to sleep until the time "wakeup" arrives, but apparently, when the system is under heavy enough load, things get delayed enough that by the time execution reaches line 1066 above the actual time is already past the "wakeup" time, making the assert fail. (The system was quite busy when this happened: not only were the backups running, but a couple of the zfs pools were also doing their regular /etc/periodic-initiated scrub at the time.)

I've gone ahead and hacked my copy to skip the pause if the wakeup time has already passed instead of panicking, logging an informative printf when this happens:

diff -r e817c2457f83 sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c
--- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c	Wed Mar 19 21:08:39 2014 -0500
+++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c	Sat Mar 22 16:28:06 2014 -0500
@@ -1063,8 +1063,11 @@
 	mutex_exit(&curthread->t_delay_lock);
 #else
 	/* XXX High resolution callouts are not available */
-	ASSERT(wakeup >= now);
-	pause("dmu_tx_delay", NSEC_TO_TICK(wakeup - now));
+	if (wakeup < now) {
+		printf("Warning: dmu_tx_delay: wakeup %lu < now %lu\n", (unsigned long)wakeup, (unsigned long)now);
+	} else {
+		pause("dmu_tx_delay", NSEC_TO_TICK(wakeup - now));
+	}
 #endif
 #else
 	hrtime_t delta = wakeup - gethrtime();

Things seem to be running okay with this patch, with an occasional console message like

Mar 22 17:43:09 ichotolot kernel: Warning: dmu_tx_delay: wakeup 3965794197537 < now 3975734195460
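For a sense of scale, the hrtime values in that console message are nanoseconds, so the missed wakeup works out to now - wakeup = 3975734195460 - 3965794197537 = 9939997923 ns: the thread came off its sleep roughly 9.9 seconds after the intended wakeup time, which matches the report of a heavily loaded system.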
Message-Id: <201403240525.s2O5P6vc094767@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/187778: [zfs] Two ZFS filesystems mounted on / at same time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Mar 2014 05:25:06 -0000 Old Synopsis: Two ZFS filesystems mounted on / at same time New Synopsis: [zfs] Two ZFS filesystems mounted on / at same time Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Mar 24 05:24:50 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=187778 From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 11:06:44 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BFBF4F4B for ; Mon, 24 Mar 2014 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 9E62116B for ; Mon, 24 Mar 2014 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OB6ivM013836 for ; Mon, 24 Mar 2014 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OB6iHN013834 for freebsd-fs@FreeBSD.org; Mon, 24 Mar 2014 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 24 Mar 2014 11:06:44 GMT Message-Id: <201403241106.s2OB6iHN013834@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Mar 2014 11:06:44 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix o kern/187261 fs [fuse] FUSE kernel panic when using socket / bind o bin/187071 fs [nfs] nfs server only start 2 daemons 1 master & 1 ser o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186574 fs [zfs] zpool history hangs (infinite loop) o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/185858 fs [zfs] zvol clone can't see new device o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. 
f kern/178231  fs  [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985  fs  [zfs] disk usage problem when copying from one zfs dat
o kern/177971  fs  [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966  fs  [zfs] resilver completes but subsequent scrub reports
o kern/177658  fs  [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536  fs  [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445  fs  [hast] HAST panic
o kern/177240  fs  [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978  fs  [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857  fs  [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253   fs  zpool(8): zfs pool indentation is misleading/wrong
o kern/176141  fs  [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950  fs  [zfs] Possible deadlock in zfs after long uptime
o kern/175897  fs  [zfs] operations on readonly zpool hang
o kern/175449  fs  [unionfs] unionfs and devfs misbehaviour
o kern/175179  fs  [zfs] ZFS may attach wrong device on move
o kern/175071  fs  [ufs] [panic] softdep_deallocate_dependencies: unrecov
o kern/174372  fs  [zfs] Pagefault appears to be related to ZFS
o kern/174315  fs  [zfs] chflags uchg not supported
o kern/174310  fs  [zfs] root point mounting broken on CURRENT with multi
o kern/174279  fs  [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830  fs  [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718  fs  [zfs] phantom directory in zraid2 pool
f kern/173657  fs  [nfs] strange UID map with nfsuserd
o kern/173363  fs  [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136  fs  [unionfs] mounting above the NFS read-only share panic
o kern/172942  fs  [smbfs] Unmounting a smb mount when the server became
o kern/172348  fs  [unionfs] umount -f of filesystem in use with readonly
o kern/172334  fs  [unionfs] unionfs permits recursive union mounts; caus
o kern/171626  fs  [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415  fs  [zfs] zfs recv fails with "cannot receive incremental
o kern/170945  fs  [gpt] disk layout not portable between direct connect
o bin/170778   fs  [zfs] [panic] FreeBSD panics randomly
o kern/170680  fs  [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170497  fs  [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945  fs  [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480  fs  [zfs] ZFS stalls on heavy I/O
o kern/169398  fs  [zfs] Can't remove file with permanent error
o kern/169339  fs  panic while " : > /etc/123"
o kern/169319  fs  [zfs] zfs resilver can't complete
o kern/168947  fs  [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942  fs  [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158  fs  [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979  fs  [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977  fs  [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688  fs  [fusefs] Incorrect signal handling with direct_io
o kern/167685  fs  [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612  fs  [portalfs] The portal file system gets stuck inside po
o kern/167272  fs  [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260  fs  [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109  fs  [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105  fs  [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067  fs  [zfs] [panic] ZFS panics the server
o kern/167065  fs  [zfs] boot fails when a spare is the boot disk
o kern/167048  fs  [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912  fs  [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851  fs  [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477  fs  [nfs] NFS data corruption.
o kern/165950  fs  [ffs] SU+J and fsck problem
o kern/165521  fs  [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392  fs  Multiple mkdir/rmdir fails with errno 31
o kern/165087  fs  [unionfs] lock violation in unionfs
o kern/164472  fs  [ufs] fsck -B panics on particular data inconsistency
o kern/164370  fs  [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261  fs  [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256  fs  [zfs] device entry for volume is not created after zfs
o kern/164184  fs  [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801  fs  [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770  fs  [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501  fs  [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944  fs  [coda] Coda file system module looks broken in 9.0
o kern/162860  fs  [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751  fs  [zfs] [panic] kernel panics during file operations
o kern/162591  fs  [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519  fs  [zfs] "zpool import" relies on buggy realpath() behavi
o kern/161968  fs  [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864  fs  [ufs] removing journaling from UFS partition fails on
o kern/161579  fs  [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533  fs  [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438  fs  [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424  fs  [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280  fs  [zfs] Stack overflow in gptzfsboot
o kern/161205  fs  [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169  fs  [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112  fs  [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893  fs  [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860  fs  [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801  fs  [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790  fs  [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777  fs  [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706  fs  [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591  fs  [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410  fs  [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283  fs  [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930  fs  [ufs] [panic] kernel core
o kern/159402  fs  [zfs][loader] symlinks cause I/O errors
o kern/159357  fs  [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356  fs  [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351  fs  [nfs] [patch] - divide by zero in mountnfs()
o kern/159251  fs  [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077  fs  [zfs] Can't cd .. with latest zfs version
o kern/159048  fs  [smbfs] smb mount corrupts large files
o kern/159045  fs  [zfs] [hang] ZFS scrub freezes system
o kern/158839  fs  [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802  fs  amd(8) ICMP storm and unkillable process.
o kern/158231  fs  [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929  fs  [nfs] NFS slow read
o kern/157399  fs  [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179  fs  [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797  fs  [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781  fs  [zfs] zfs is losing the snapshot directory,
p kern/156545  fs  [ufs] mv could break UFS on SMP systems
o kern/156193  fs  [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039  fs  [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615  fs  [zfs] zfs v28 broken on sparc64 -current
o kern/155587  fs  [zfs] [panic] kernel panic with zfs
p kern/155411  fs  [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199  fs  [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104   fs  [zfs][patch] use /dev prefix by default when importing
o kern/154930  fs  [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828  fs  [msdosfs] Unable to create directories on external USB
o kern/154491  fs  [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228  fs  [md] md getting stuck in wdrain state
o kern/153996  fs  [zfs] zfs root mount error while kernel is not located
o kern/153753  fs  [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716  fs  [zfs] zpool scrub time remaining is incorrect
o kern/153695  fs  [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680  fs  [xfs] 8.1 failing to mount XFS partitions
o kern/153418  fs  [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351  fs  [zfs] locking directories/files in ZFS
o bin/153258   fs  [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173  fs  [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142   fs  [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126  fs  [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022  fs  [nfs] nfs service hangs with linux client [regression]
o kern/151942  fs  [zfs] panic during ls(1) zfs snapshot directory
o kern/151905  fs  [zfs] page fault under load in /sbin/zfs
o bin/151713   fs  [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648  fs  [zfs] disk wait bug
o kern/151629  fs  [fs] [patch] Skip empty directory entries during name
o kern/151330  fs  [zfs] will unshare all zfs filesystem after execute a
o kern/151326  fs  [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251  fs  [ufs] Can not create files on filesystem with heavy us
o kern/151226  fs  [zfs] can't delete zfs snapshot
o kern/150503  fs  [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501  fs  [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390  fs  [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336  fs  [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208  fs  mksnap_ffs(8) hang/deadlock
o kern/149173  fs  [patch] [zfs] make OpenSolaris installa
o kern/149015  fs  [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014  fs  [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013  fs  [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504  fs  [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490  fs  [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368  fs  [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138  fs  [zfs] zfs raidz pool commands freeze
o kern/147903  fs  [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881  fs  [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420  fs  [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941  fs  [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786  fs  [zfs] zpool import hangs with checksum errors
o kern/146708  fs  [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528  fs  [zfs] Severe memory leak in ZFS on i386
o kern/146502  fs  [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750  fs  [unionfs] [hang] unionfs locks the machine
s kern/145712  fs  [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411  fs  [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309   fs  bsdlabel: Editing disk label invalidates the whole dev
o kern/145272  fs  [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246  fs  [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238  fs  [zfs] [panic] kernel panic on zpool clear tank
o kern/145229  fs  [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189  fs  [nfs] nfsd performs abysmally under load
o kern/144929  fs  [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447  fs  [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416  fs  [panic] Kernel panic on online filesystem optimization
s kern/144415  fs  [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234  fs  [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825  fs  [nfs] [panic] Kernel panic on NFS client
o bin/143572   fs  [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212  fs  [nfs] NFSv4 client strange work ...
o kern/143184  fs  [zfs] [lor] zfs/bufwait LOR
o kern/142878  fs  [zfs] [vfs] lock order reversal
o kern/142489  fs  [zfs] [lor] allproc/zfs LOR
o kern/142466  fs  Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306  fs  [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068  fs  [ufs] BSD labels are got deleted spontaneously
o kern/141950  fs  [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897  fs  [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463  fs  [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141091  fs  [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086  fs  [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010  fs  [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888  fs  [zfs] boot fail from zfs root while the pool resilveri
o kern/140661  fs  [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640  fs  [zfs] snapshot crash
o kern/140068  fs  [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725  fs  [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715  fs  [zfs] vfs.numvnodes leak on busy zfs
p bin/139651   fs  [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407  fs  [smbfs] [panic] smb mount causes system crash if remot
o kern/138662  fs  [panic] ffs_blkfree: freeing free block
o kern/138421  fs  [ufs] [patch] remove UFS label limitations
o kern/138202  fs  mount_msdosfs(1) see only 2Gb
o kern/137588  fs  [unionfs] [lor] LOR nfs/ufs/nfs
o kern/136968  fs  [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945  fs  [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944  fs  [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873  fs  [ntfs] Missing directories/files on NTFS volume
p kern/136470  fs  [nfs] Cannot mount / in read-only, over NFS
o kern/135546  fs  [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469  fs  [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050  fs  [zfs] ZFS clears/hides disk errors on reboot
o kern/134491  fs  [zfs] Hot spares are rather cold...
o kern/133676  fs  [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174  fs  [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960  fs  [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397  fs  reboot causes filesystem corruption (failure to sync b
o kern/132331  fs  [ufs] [lor] LOR ufs and syncer
o kern/132237  fs  [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145  fs  [panic] File System Hard Crashes
o kern/131441  fs  [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360  fs  [nfs] poor scaling behavior of the NFS server under lo
o kern/131342  fs  [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341   fs  makefs: error "Bad file descriptor" on the mount poin
o kern/130920  fs  [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210  fs  [nullfs] Error by check nullfs
o kern/129760  fs  [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488  fs  [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231  fs  [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152  fs  [panic] non-userfriendly panic when trying to mount(8)
o kern/127787  fs  [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270   fs  fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029  fs  [panic] mount(8): trying to mount a write protected zi
o kern/126973  fs  [unionfs] [hang] System hang with unionfs and init chr
o kern/126553  fs  [unionfs] unionfs move directory problem 2 (files appe
o kern/126287  fs  [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895  fs  [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738  fs  [zfs] [request] SHA256 acceleration in ZFS
o kern/123939  fs  [msdosfs] corrupts new files
o bin/123574   fs  [unionfs] df(1) -t option destroys info for unionfs (a
o kern/122380  fs  [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172   fs  [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898   fs  [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072   fs  [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483  fs  [ntfs] [patch] NTFS filesystem locking changes
o kern/120482  fs  [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912  fs  [2tb] disk sizing/geometry problem with large array
o kern/118713  fs  [minidump] [patch] Display media size required for a k
o kern/118318  fs  [nfs] NFS server hangs under special circumstances
o bin/118249   fs  [ufs] mv(1): moving a directory changes its mtime
o kern/118126  fs  [nfs] [patch] Poor NFS server write performance
o kern/118107  fs  [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954  fs  [ufs] dirhash on very large directories blocks the mac
o bin/117315   fs  [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158  fs  [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980   fs  [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931  fs  lack of fsck_cd9660 prevents mounting iso images with
o kern/116583  fs  [ffs] [hang] System freezes for short time when using
o bin/115361   fs  [zfs] mount(8) gets into a state where it won't set/un
o kern/114955  fs  [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs  [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs  [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs  [patch] [request] add -d option to umount(8) to detach
o kern/113852  fs  [smbfs] smbfs does not properly implement DFS referral
o bin/113838   fs  [patch] [request] mount(8): add support for relative p
o bin/113049   fs  [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs  [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843  fs  [msdosfs] Long Names of files are incorrectly created
o kern/111782  fs  [ufs] dump(8) fails horribly for large filesystems
s bin/111146   fs  [2tb] fsck(8) fails on 6T filesystem
o bin/107829   fs  [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107  fs  [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406  fs  [ufs] Processes get stuck in "ufs" state under persist
o kern/103035  fs  [ntfs] Directories in NTFS mounted disc images appear
o kern/101324  fs  [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290   fs  [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498    fs  [request] newfs(8) has no option to clear the first 12
o kern/97377   fs  [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222   fs  [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849   fs  [ufs] rename on UFS filesystem is not atomic
o bin/94810    fs  fsck(8) incorrectly reports 'file system marked clean'
o kern/94769   fs  [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733   fs  [smbfs] smbfs may cause double unlock
o kern/93942   fs  [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272   fs  [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134   fs  [smbfs] [patch] Preserve access and modification time
a kern/90815   fs  [smbfs] [patch] SMBFS with character conversions somet
o kern/88657   fs  [smbfs] windows client hang when browsing a samba shar
o kern/88555   fs  [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966    fs  [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859   fs  [smbfs] System reboot while umount smbfs.
o kern/86587   fs  [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494    fs  fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088   fs  [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779    fs  Background-fsck checks one filesystem twice and omits
o kern/73484   fs  [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs  [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs  [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600    fs  fsck(8) throws files away when it can't grow lost+foun
o kern/68978   fs  [panic] [ufs] crashes with failing hard disk, loose po
o kern/67326   fs  [msdosfs] crash after attempt to mount write protected
o kern/65920   fs  [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs  [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs  [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs  [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs  [hang] Unbounded inode allocation causes kernel to loc
o kern/36566   fs  [smbfs] System reboot with dead smb mount and umount
o bin/27687    fs  fsck(8) wrapper is not properly passing options to fsc
o kern/18874   fs  [2TB] 32bit NFS servers export wrong negative values t
o kern/9619    fs  [nfs] Restarting mountd kills existing mounts

342 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 11:06:58 2014 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0B92C364 for ; Mon, 24 Mar 2014 11:06:58 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id D49B819E for ; Mon, 24 Mar 2014 11:06:57 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OB6vOR014098 for ; Mon, 24 Mar 2014 11:06:57 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OB6vci014096 for fs@FreeBSD.org; Mon, 24 Mar 2014 11:06:57 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 24 Mar 2014 11:06:57 GMT Message-Id: <201403241106.s2OB6vci014096@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: fs@FreeBSD.org Subject: Current problem reports assigned to fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Mar 2014 11:06:58 -0000

Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/167362  fs  [fusefs] Reproduceble Page Fault when running rsync ov

1 problem total.
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 11:50:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 12A47A6E for ; Mon, 24 Mar 2014 11:50:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id D735B8DA for ; Mon, 24 Mar 2014 11:50:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OBo1aA029494 for ; Mon, 24 Mar 2014 11:50:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OBo1Oc029493; Mon, 24 Mar 2014 11:50:01 GMT (envelope-from gnats) Date: Mon, 24 Mar 2014 11:50:01 GMT Message-Id: <201403241150.s2OBo1Oc029493@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Karl Denninger List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Mar 2014 11:50:02 -0000

The following reply was made to PR kern/187594; it has been noted by GNATS.

From: Karl Denninger
To: bug-followup@FreeBSD.org, karl@fs.denninger.net
Cc:
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Mon, 24 Mar 2014 06:41:16 -0500

This is a cryptographically signed message in MIME format.

Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: quoted-printable

Update:

1. Patch is still good against latest arc.c change (associated with new
flags on the pool).

2. Change default low memory warning for the arc to cnt.v_free_target; no
margin.  This appears to provide the best performance and does not cause
problems with inact pages or other misbehavior on my test systems.

3. Expose the return flag (arc_shrink_needed) so if you care to watch it
for some reason, you can.

*** arc.c.original	Sun Mar 23 14:56:01 2014
--- arc.c	Sun Mar 23 15:12:15 2014
***************
*** 18,23 ****
--- 18,95 ----
   *
   * CDDL HEADER END
   */
+ 
+ /* Karl Denninger (karl@denninger.net), 3/20/2014, FreeBSD-specific
+  *
+  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
+  * the ARC cache to be pared down.  The reason for the change is that the
+  * apparent attempted algorithm is to start evicting ARC cache when free
+  * pages fall below 25% of installed RAM.  This maps reasonably well to how
+  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
+  * to pare down.
+  *
+  * The problem is that on FreeBSD machines the system doesn't appear to be
+  * getting what the authors of the original code thought they were looking at
+  * with its test -- or at least not what Solaris did -- and as a result that
+  * test never triggers.  That leaves the only reclaim trigger as the "paging
+  * needed" status flag, and by the time that trips the system is already
+  * in low-memory trouble.  This can lead to severe pathological behavior
+  * under the following scenario:
+  * - The system starts to page and ARC is evicted.
+  * - The system stops paging as ARC's eviction drops wired RAM a bit.
+  * - ARC starts increasing its allocation again, and wired memory grows.
+  * - A new image is activated, and the system once again attempts to page.
+  * - ARC starts to be evicted again.
+  * - Back to #2
+  *
+  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
+  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
+  * else needs it.  That would be ok if we evicted cache when required.
+  *
+  * Unfortunately the system can get into a state where it never
+  * manages to page anything of materiality back in, as if there is active
+  * I/O the ARC will start grabbing space once again as soon as the memory
+  * contention state drops.  For this reason the "paging is occurring" flag
+  * should be the **last resort** condition for ARC eviction; you want to
+  * (as Solaris does) start when there is material free RAM left BUT the
+  * vm system thinks it needs to be active to steal pages back in the attempt
+  * to never get into the condition where you're potentially paging off
+  * executables in favor of leaving disk cache allocated.
+  *
+  * To fix this we change how we look at low memory, declaring two new
+  * runtime tunables and one status.
+  *
+  * The new sysctls are:
+  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
+  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
+  * vfs.zfs.arc_shrink_needed (shows "1" if we're asking for shrinking the ARC)
+  *
+  * vfs.zfs.arc_freepages is initialized from vm.v_free_target.
+  * This should insure that we allow the VM system to steal pages,
+  * but pare the cache before we suspend processes attempting to get more
+  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
+  * or force a specific percentage reservation as well, but doing so may
+  * cause the cache to pare back while the VM system remains willing to
+  * allow "inactive" pages to accumulate.  The challenge is that image
+  * activation can force things into the page space on a repeated basis
+  * if you allow this level to be too small (the above pathological
+  * behavior); the defaults should avoid that behavior but the sysctls
+  * are exposed should your workload require adjustment.
+  *
+  * If we're using this check for low memory we are replacing the previous
+  * ones, including the oddball "random" reclaim that appears to fire far
+  * more often than it should.  We still trigger if the system pages.
+  *
+  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
+  * status messages when the reclaim status trips on and off, along with the
+  * page count aggregate that triggered it (and the free space) for each
+  * event.
+  */
+ 
+ #define	NEWRECLAIM
+ #undef	NEWRECLAIM_DEBUG
+ 
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.
   * Copyright (c) 2013 by Delphix. All rights reserved.
***************
*** 139,144 ****
--- 211,223 ----
  
  #include
  
+ #ifdef	NEWRECLAIM
+ #ifdef	__FreeBSD__
+ #include
+ #include
+ #endif
+ #endif	/* NEWRECLAIM */
+ 
  #ifdef illumos
  #ifndef _KERNEL
  /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
***************
*** 203,218 ****
--- 282,320 ----
  int zfs_arc_shrink_shift = 0;
  int zfs_arc_p_min_shift = 0;
  int zfs_disable_dup_eviction = 0;
+ #ifdef	NEWRECLAIM
+ #ifdef	__FreeBSD__
+ static	int freepages = 0;	/* This much memory is considered critical */
+ static	int percent_target = 0;	/* Additionally reserve "X" percent free RAM */
+ static	int shrink_needed = 0;	/* Shrinkage of ARC cache needed? */
+ #endif	/* __FreeBSD__ */
+ #endif	/* NEWRECLAIM */
  
  TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
  TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
  TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
+ #ifdef	NEWRECLAIM
+ #ifdef	__FreeBSD__
+ TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
+ TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
+ TUNABLE_INT("vfs.zfs.arc_shrink_needed", &shrink_needed);
+ #endif	/* __FreeBSD__ */
+ #endif	/* NEWRECLAIM */
+ 
  SYSCTL_DECL(_vfs_zfs);
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0, "Maximum ARC size");
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0, "Minimum ARC size");
  
+ #ifdef	NEWRECLAIM
+ #ifdef	__FreeBSD__
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_shrink_needed, CTLFLAG_RD, &shrink_needed, 0, "ARC Memory Constrained (0 = no, 1 = yes)");
+ #endif	/* __FreeBSD__ */
+ #endif	/* NEWRECLAIM */
+ 
  /*
   * Note that buffers can be in one of 6 states:
   *	ARC_anon - anonymous (discussed below)
***************
*** 2438,2443 ****
--- 2540,2550 ----
  {
  
  #ifdef _KERNEL
+ #ifdef	NEWRECLAIM_DEBUG
+ 	static	int	xval = -1;
+ 	static	int	oldpercent = 0;
+ 	static	int	oldfreepages = 0;
+ #endif	/* NEWRECLAIM_DEBUG */
  
  	if (needfree)
  		return (1);
***************
*** 2476,2481 ****
--- 2583,2589 ----
  		return (1);
  
  #if defined(__i386)
+ 
  	/*
  	 * If we're on an i386 platform, it's possible that we'll exhaust the
  	 * kernel heap space before we ever run out of available physical
***************
*** 2492,2502 ****
  		return (1);
  #endif
  #else	/* !sun */
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
- #else
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif
--- 2600,2664 ----
  		return (1);
  #endif
  #else	/* !sun */
+ 
+ #ifdef	NEWRECLAIM
+ #ifdef	__FreeBSD__
+ /*
+  * Implement the new tunable free RAM algorithm.  We check the free pages
+  * against the minimum specified target and the percentage that should be
+  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
+  * on a FreeBSD system the older checks are not performed.
+  *
+  * Check first to see if we need to init freepages, then test.
+  */
+ 	if (!freepages) {		/* If zero then (re)init */
+ 		freepages = cnt.v_free_target;
+ #ifdef	NEWRECLAIM_DEBUG
+ 		printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u]\n", freepages);
+ #endif	/* NEWRECLAIM_DEBUG */
+ 	}
+ #ifdef	NEWRECLAIM_DEBUG
+ 	if (percent_target != oldpercent) {
+ 		printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, cnt.v_page_count, cnt.v_free_count);
+ 		oldpercent = percent_target;
+ 	}
+ 	if (freepages != oldfreepages) {
+ 		printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, cnt.v_page_count, cnt.v_free_count);
+ 		oldfreepages = freepages;
+ 	}
+ #endif	/* NEWRECLAIM_DEBUG */
+ /*
+  * Now figure out how much free RAM we require to call the ARC cache status
+  * "ok".  Add the percentage specified of the total to the base requirement.
+  */
+ 
+ 	if (cnt.v_free_count < (freepages + ((cnt.v_page_count / 100) * percent_target))) {
+ #ifdef	NEWRECLAIM_DEBUG
+ 		if (xval != 1) {
+ 			printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
+ 			xval = 1;
+ 		}
+ #endif	/* NEWRECLAIM_DEBUG */
+ 		shrink_needed = 1;
+ 		return(1);
+ 	} else {
+ #ifdef	NEWRECLAIM_DEBUG
+ 		if (xval != 0) {
+ 			printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
+ 			xval = 0;
+ 		}
+ #endif	/* NEWRECLAIM_DEBUG */
+ 		shrink_needed = 0;
+ 		return(0);
+ 	}
+ 
+ #endif	/* __FreeBSD__ */
+ #endif	/* NEWRECLAIM */
+ 
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */
  
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif

-- 
-- Karl
karl@denninger.net
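The arithmetic in the new check is easy to sanity-test outside the kernel. The following is a minimal userland sketch of the same comparison, with made-up page counts (the numbers and the standalone-program framing are illustrative assumptions, not figures from the PR):

#include <stdio.h>

int
main(void)
{
	/* Hypothetical 4 GB machine with 4 KB pages: ~1,000,000 pages. */
	unsigned int v_page_count = 1000000;	/* stands in for cnt.v_page_count */
	unsigned int v_free_count = 70000;	/* stands in for cnt.v_free_count */
	unsigned int freepages = 27000;		/* vfs.zfs.arc_freepages */
	unsigned int percent_target = 5;	/* vfs.zfs.arc_freepage_percent */

	/*
	 * Same expression as the patch: base requirement plus a percentage
	 * of total RAM.  Here: 27000 + ((1000000 / 100) * 5) = 77000 pages.
	 */
	unsigned int required = freepages +
	    ((v_page_count / 100) * percent_target);

	if (v_free_count < required)
		printf("RECLAIM: free %u < required %u, pare the ARC\n",
		    v_free_count, required);
	else
		printf("NORMAL: free %u >= required %u\n",
		    v_free_count, required);
	return (0);
}

With vfs.zfs.arc_freepage_percent left at its default of 0, the test degenerates to a plain v_free_count < freepages comparison, which is why initializing vfs.zfs.arc_freepages from vm.v_free_target alone is a workable default.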
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 12:58:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B6EB4B38; Mon, 24 Mar 2014 12:58:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7FA32FE4; Mon, 24 Mar 2014 12:58:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OCw2qp051493; Mon, 24 Mar 2014 12:58:02 GMT (envelope-from mckusick@freefall.freebsd.org) Received: (from mckusick@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OCw2gl051492; Mon, 24 Mar 2014 12:58:02 GMT (envelope-from mckusick) Date: Mon, 24 Mar 2014 12:58:02 GMT Message-Id: <201403241258.s2OCw2gl051492@freefall.freebsd.org> To: mckusick@FreeBSD.org, fs@FreeBSD.org, freebsd-fs@FreeBSD.org From: mckusick@FreeBSD.org
Subject: Re: kern/167362: [fusefs] Reproduceble Page Fault when running rsync over sshfs/encfs. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Mar 2014 12:58:02 -0000

Synopsis: [fusefs] Reproduceble Page Fault when running rsync over sshfs/encfs.

Responsible-Changed-From-To: fs->freebsd-fs
Responsible-Changed-By: mckusick
Responsible-Changed-When: Mon Mar 24 12:56:59 UTC 2014
Responsible-Changed-Why: Reassign from fs@freebsd.org to freebsd-fs@freebsd.org as that is where all the other filesystem bugs are kept.

http://www.freebsd.org/cgi/query-pr.cgi?pr=167362

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 15:02:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A74FFEBC for ; Mon, 24 Mar 2014 15:02:10 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 023E7F76 for ; Mon, 24 Mar 2014 15:02:08 +0000 (UTC) Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s2OF1qIN089268 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL); Mon, 24 Mar 2014 11:01:54 -0400 (EDT) (envelope-from mikej@mikej.com) Received: from firewall.mikej.com (localhost.mikej.com [127.0.0.1]) by firewall.mikej.com (8.14.8/8.14.8) with ESMTP id s2OF1BpB090835 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Mon, 24 Mar 2014 11:01:51 -0400 (EDT) (envelope-from mikej@mikej.com) Received: (from www@localhost) by firewall.mikej.com (8.14.8/8.14.8/Submit) id s2OF19TN090834; Mon, 24 Mar 2014 11:01:09 -0400 (EDT) (envelope-from mikej@mikej.com) X-Authentication-Warning: firewall.mikej.com: www set sender to mikej@mikej.com using -f To: Karl Denninger , Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix [SB QUAR: Fri Mar 21 16:01:12 2014] MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8bit Date: Mon, 24 Mar 2014 11:00:38 -0400 From: mikej In-Reply-To: <532CD3BC.5020305@denninger.net> References: <201403201710.s2KHA0e9043051@freefall.freebsd.org> <31e2f5092048c128add6f4cd3d136f4d@mail.mikej.com> <532CD3BC.5020305@denninger.net> Message-ID: X-Sender: mikej@mikej.com User-Agent: Roundcube Webmail/0.6-beta X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Mar 2014 15:02:10 -0000

Karl,

Not being a C coder it appears a declaration is missing.

--- arc.o ---
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2614:15: error: use of undeclared identifier 'cnt'
        freepages = cnt.v_free_target - (cnt.v_free_target / 33);

Thanks again,

Michael Jung

On 2014-03-21 20:05, Karl Denninger wrote:
> Here 'ya go...
>
> Please keep me posted (the list is best as the more public commentary
> the better, and if this needs more tuning that's the way to find out!)
> on how it works for you.
> > I have it in production at this point and am happy with it -- the > current default is at the pager "wakeup" level less 3%, but it of > course can be tuned manually. > > On 3/21/2014 3:59 PM, mikej wrote: > >> Karl, >> >> I've looked at my raw mailbox and something is trashing tabs and >> line >> length for your more recent patches in email. >> >> I did not see any attachments, nor updates to the PR for download - >> would >> you mind sending me the latest patch as an attachment? >> >> Thanks for your work, I believe this is going to add real stability >> >> without having to set vfs.zfs.arc_max and other tunables. >> >> Kind regards, >> >> Michael Jung >> >> On 2014-03-20 13:10, Karl Denninger wrote: >> >>> The following reply was made to PR kern/187594; it has been noted >>> by GNATS. >>> >>> From: Karl Denninger [1] >>> To: bug-followup@FreeBSD.org [2], karl@fs.denninger.net [3] >>> Cc: >>> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem >>> and fix >>> Date: Thu, 20 Mar 2014 12:00:54 -0500 >>> >>>  This is a cryptographically signed message in MIME format. >>> >>>  --------------ms010508000607000909070805 >>>  Content-Type: text/plain; charset=ISO-8859-1; format=flowed >>>  Content-Transfer-Encoding: quoted-printable >>> >>>  Responsive to avg's comment and with another overnight and >>> daytime load=20 >>>  of testing on multiple machines with varying memory configs >>> from 4-24GB=20 >>>  of RAM here is another version of the patch. >>> >>>  The differences are: >>> >>>  1. No longer use kernel_sysctlbyname, include the VM header >>> file and get = >>> >>>  the values directly (less overhead.)  Remove the variables no >>> longer need= >>>  ed. >>> >>>  2. Set the default free RAM level for ARC shrinkage to >>> v_free_target=20 >>>  less 3% as I was able to provoke a stall once with it set to a >>> 5%=20 >>>  reservation, was able to provoke it with the parameter set to >>> 10% with a = >>> >>>  lot of work and was able to do so "on demand" with it set to >>> 20%.  With=20 >>>  a 5% invasion initiating a scrub with very heavy I/O and image >>> load=20 >>>  (hundreds of web and database processes) provoked a ~10 second >>> system=20 >>>  stall.  With it set to 3% I have not been able to reproduce >>> the stall=20 >>>  yet the inactive page count remains stable even under extremely >>> heavy=20 >>>  load, indicating that page-stealing remains effective when >>> required. =20 >>>  Note that for my workload even with this level set above >>> v_free_target,=20 >>>  which would imply no page stealing by the VM system before ARC >>> expansion = >>> >>>  is halted, I do not get unbridled inactive page growth. >>> >>>  As before vfs.zfs.zrc_freepages and >>> vfs.zfs.arc_freepage_percent remain=20 >>>  as accessible knobs if you wish to twist them for some reason >>> to=20 >>>  compensate for an unusual load profile or machine >>> configuration. >>> >>>  *** arc.c.original    Thu Mar 13 09:18:48 2014 >>>  --- arc.c    Thu Mar 20 11:51:48 2014 >>>  *************** >>>  *** 18,23 **** >>>  --- 18,94 ---- >>>      * >>>      * CDDL HEADER END >>>      */ >>>  + >>>  + /* Karl Denninger (karl@denninger.net [4]), 3/20/2014, >>> FreeBSD-specific >>>  +  * >>>  +  * If "NEWRECLAIM" is defined, change the "low memory" >>> warning that cau= >>>  ses >>>  +  * the ARC cache to be pared down.  The reason for the >>> change is that t= >>>  he >>>  +  * apparent attempted algorithm is to start evicting ARC >>> cache when fre= >>>  e >>>  +  * pages fall below 25% of installed RAM.  
This maps >>> reasonably well to= >>>   how >>>  +  * Solaris is documented to behave; when "lotsfree" is >>> invaded ZFS is t= >>>  old >>>  +  * to pare down. >>>  +  * >>>  +  * The problem is that on FreeBSD machines the system >>> doesn't appear to= >>>   be >>>  +  * getting what the authors of the original code thought >>> they were look= >>>  ing at >>>  +  * with its test -- or at least not what Solaris did -- and >>> as a result= >>>   that >>>  +  * test never triggers.  That leaves the only reclaim >>> trigger as the "p= >>>  aging >>>  +  * needed" status flag, and by the time * that trips the >>> system is alre= >>>  ady >>>  +  * in low-memory trouble.  This can lead to severe >>> pathological behavio= >>>  r >>>  +  * under the following scenario: >>>  +  * - The system starts to page and ARC is evicted. >>>  +  * - The system stops paging as ARC's eviction drops wired >>> RAM a bit. >>>  +  * - ARC starts increasing its allocation again, and wired >>> memory grows= >>>  =2E >>>  +  * - A new image is activated, and the system once again >>> attempts to pa= >>>  ge. >>>  +  * - ARC starts to be evicted again. >>>  +  * - Back to #2 >>>  +  * >>>  +  * Note that ZFS's ARC default (unless you override it in >>> /boot/loader.= >>>  conf) >>>  +  * is to allow the ARC cache to grab nearly all of free RAM, >>> provided n= >>>  obody >>>  +  * else needs it.  That would be ok if we evicted cache >>> when required. >>>  +  * >>>  +  * Unfortunately the system can get into a state where it >>> never >>>  +  * manages to page anything of materiality back in, as if >>> there is acti= >>>  ve >>>  +  * I/O the ARC will start grabbing space once again as soon >>> as the memo= >>>  ry >>>  +  * contention state drops.  For this reason the "paging is >>> occurring" f= >>>  lag >>>  +  * should be the **last resort** condition for ARC eviction; >>> you want t= >>>  o >>>  +  * (as Solaris does) start when there is material free RAM >>> left BUT the= >>> >>>  +  * vm system thinks it needs to be active to steal pages >>> back in the at= >>>  tempt >>>  +  * to never get into the condition where you're potentially >>> paging off >>>  +  * executables in favor of leaving disk cache allocated. >>>  +  * >>>  +  * To fix this we change how we look at low memory, >>> declaring two new >>>  +  * runtime tunables. >>>  +  * >>>  +  * The new sysctls are: >>>  +  * vfs.zfs.arc_freepages (free pages required to call RAM >>> "sufficient")= >>> >>>  +  * vfs.zfs.arc_freepage_percent (additional reservation >>> percentage, def= >>>  ault 0) >>>  +  * >>>  +  * vfs.zfs.arc_freepages is initialized from >>> vm.v_free_target, less 3%.= >>> >>>  +  * This should insure that we allow the VM system to steal >>> pages first,= >>> >>>  +  * but pare the cache before we suspend processes attempting >>> to get mor= >>>  e >>>  +  * memory, thereby avoiding "stalls."  You can set this >>> higher if you w= >>>  ish, >>>  +  * or force a specific percentage reservation as well, but >>> doing so may= >>> >>>  +  * cause the cache to pare back while the VM system remains >>> willing to >>>  +  * allow "inactive" pages to accumulate.  The challenge is >>> that image >>>  +  * activation can force things into the page space on a >>> repeated basis >>>  +  * if you allow this level to be too small (the above >>> pathological >>>  +  * behavior); the defaults should avoid that behavior but >>> the sysctls >>>  +  * are exposed should your workload require adjustment. 
>>>  +  * >>>  +  * If we're using this check for low memory we are replacing >>> the previo= >>>  us >>>  +  * ones, including the oddball "random" reclaim that appears >>> to fire fa= >>>  r >>>  +  * more often than it should.  We still trigger if the >>> system pages. >>>  +  * >>>  +  * If you turn on NEWRECLAIM_DEBUG then the kernel will >>> print on the co= >>>  nsole >>>  +  * status messages when the reclaim status trips on and off, >>> along with= >>>   the >>>  +  * page count aggregate that triggered it (and the free >>> space) for each= >>> >>>  +  * event. >>>  +  */ >>>  + >>>  + #define    NEWRECLAIM >>>  + #undef    NEWRECLAIM_DEBUG >>>  + >>>  + >>>     /* >>>      * Copyright (c) 2005, 2010, Oracle and/or its >>> affiliates. All rights = >>>  reserved. >>>      * Copyright (c) 2013 by Delphix. All rights reserved. >>>  *************** >>>  *** 139,144 **** >>>  --- 210,222 ---- >>>    =20 >>>     #include >>>    =20 >>>  + #ifdef    NEWRECLAIM >>>  + #ifdef    __FreeBSD__ >>>  + #include >>>  + #include >>>  + #endif >>>  + #endif    /* NEWRECLAIM */ >>>  + >>>     #ifdef illumos >>>     #ifndef _KERNEL >>>     /* set with ZFS_DEBUG=3Dwatch, to enable watchpoints on >>> frozen buffers= >>>   */ >>>  *************** >>>  *** 203,218 **** >>>  --- 281,316 ---- >>>     int zfs_arc_shrink_shift =3D 0; >>>     int zfs_arc_p_min_shift =3D 0; >>>     int zfs_disable_dup_eviction =3D 0; >>>  + #ifdef    NEWRECLAIM >>>  + #ifdef  __FreeBSD__ >>>  + static    int freepages =3D 0;    /* This much memory >>> is considered critical = >>>  */ >>>  + static    int percent_target =3D 0;    /* Additionally >>> reserve "X" percent fr= >>>  ee RAM */ >>>  + #endif    /* __FreeBSD__ */ >>>  + #endif    /* NEWRECLAIM */ >>>    =20 >>>     TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max); >>>     TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min); >>>     TUNABLE_QUAD("vfs.zfs.arc_meta_limit", >>> &zfs_arc_meta_limit); >>>  + #ifdef    NEWRECLAIM >>>  + #ifdef  __FreeBSD__ >>>  + TUNABLE_INT("vfs.zfs.arc_freepages", &freepages); >>>  + TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target); >>> >>>  + #endif    /* __FreeBSD__ */ >>>  + #endif    /* NEWRECLAIM */ >>>  + >>>     SYSCTL_DECL(_vfs_zfs); >>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, >>> &zfs_arc_max,= >>>   0, >>>         "Maximum ARC size"); >>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, >>> &zfs_arc_min,= >>>   0, >>>         "Minimum ARC size"); >>>    =20 >>>  + #ifdef    NEWRECLAIM >>>  + #ifdef  __FreeBSD__ >>>  + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, >>> &freepages= >>>  , 0, "ARC Free RAM Pages Required"); >>>  + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, >>> CTLFLAG_RWTUN, &pe= >>>  rcent_target, 0, "ARC Free RAM Target percentage"); >>>  + #endif    /* __FreeBSD__ */ >>>  + #endif    /* NEWRECLAIM */ >>>  + >>>     /* >>>      * Note that buffers can be in one of 6 states: >>>      *    ARC_anon    - anonymous (discussed below) >>>  *************** >>>  *** 2438,2443 **** >>>  --- 2536,2546 ---- >>>     { >>>    =20 >>>     #ifdef _KERNEL >>>  + #ifdef    NEWRECLAIM_DEBUG >>>  +     static    int    xval =3D -1; >>>  +     static    int    oldpercent =3D 0; >>>  +     static    int    oldfreepages =3D 0; >>>  + #endif    /* NEWRECLAIM_DEBUG */ >>>    =20 >>>         if (needfree) >>>             return (1); >>>  *************** >>>  *** 2476,2481 **** >>>  --- 2579,2585 ---- >>>             return (1); >>>    =20 >>>     #if defined(__i386) >>>  + >>>    
     /* >>>          * If we're on an i386 platform, it's possible >>> that we'll exhaust the= >>> >>>          * kernel heap space before we ever run out of >>> available physical >>>  *************** >>>  *** 2492,2502 **** >>>             return (1); >>>     #endif >>>     #else    /* !sun */ >>>         if (kmem_used() > (kmem_size() * 3) / 4) >>>             return (1); >>>     #endif    /* sun */ >>>    =20 >>>  - #else >>>         if (spa_get_random(100) =3D=3D 0) >>>             return (1); >>>     #endif >>>  --- 2596,2658 ---- >>>             return (1); >>>     #endif >>>     #else    /* !sun */ >>>  + >>>  + #ifdef    NEWRECLAIM >>>  + #ifdef  __FreeBSD__ >>>  + /* >>>  +  * Implement the new tunable free RAM algorithm.  We check >>> the free pag= >>>  es >>>  +  * against the minimum specified target and the percentage >>> that should = >>>  be >>>  +  * free.  If we're low we ask for ARC cache shrinkage.  If >>> this is defi= >>>  ned >>>  +  * on a FreeBSD system the older checks are not performed. >>>  +  * >>>  +  * Check first to see if we need to init freepages, then >>> test. >>>  +  */ >>>  +     if (!freepages) {        /* If zero then >>> (re)init */ >>>  +         freepages =3D cnt.v_free_target - >>> (cnt.v_free_target / 33); >>>  + #ifdef    NEWRECLAIM_DEBUG >>>  +         printf("ZFS ARC: Default >>> vfs.zfs.arc_freepages to [%u] [%u less 3%%]= >>>  n", freepages, cnt.v_free_target); >>>  + #endif    /* NEWRECLAIM_DEBUG */ >>>  +     } >>>  + #ifdef    NEWRECLAIM_DEBUG >>>  +     if (percent_target !=3D oldpercent) { >>>  +         printf("ZFS ARC: Reservation percent change >>> to [%d], [%d] pages, [%d]= >>>   freen", percent_target, cnt.v_page_count, cnt.v_free_count); >>>  +         oldpercent =3D percent_target; >>>  +     } >>>  +     if (freepages !=3D oldfreepages) { >>>  +         printf("ZFS ARC: Low RAM page change to [%d], >>> [%d] pages, [%d] freen= >>>  ", freepages, cnt.v_page_count, cnt.v_free_count); >>>  +         oldfreepages =3D freepages; >>>  +     } >>>  + #endif    /* NEWRECLAIM_DEBUG */ >>>  + /* >>>  +  * Now figure out how much free RAM we require to call the >>> ARC cache st= >>>  atus >>>  +  * "ok".  Add the percentage specified of the total to the >>> base require= >>>  ment. 
>>>  +  */ >>>  + >>>  +     if (cnt.v_free_count < freepages + ((cnt.v_page_count >>> / 100) * percent= >>>  _target)) { >>>  + #ifdef    NEWRECLAIM_DEBUG >>>  +         if (xval !=3D 1) { >>>  +             printf("ZFS ARC: RECLAIM total %u, >>> free %u, free pct (%u), reserved = >>>  (%u), target pct (%u)n", cnt.v_page_count, cnt.v_free_count, >>> ((cnt.v_fre= >>>  e_count * 100) / cnt.v_page_count), freepages, percent_target); >>> >>>  +             xval =3D 1; >>>  +         } >>>  + #endif    /* NEWRECLAIM_DEBUG */ >>>  +         return(1); >>>  +     } else { >>>  + #ifdef    NEWRECLAIM_DEBUG >>>  +         if (xval !=3D 0) { >>>  +             printf("ZFS ARC: NORMAL total %u, >>> free %u, free pct (%u), reserved (= >>>  %u), target pct (%u)n", cnt.v_page_count, cnt.v_free_count, >>> ((cnt.v_free= >>>  _count * 100) / cnt.v_page_count), freepages, percent_target); >>>  +             xval =3D 0; >>>  +         } >>>  + #endif    /* NEWRECLAIM_DEBUG */ >>>  +         return(0); >>>  +     } >>>  + >>>  + #endif    /* __FreeBSD__ */ >>>  + #endif    /* NEWRECLAIM */ >>>  + >>>         if (kmem_used() > (kmem_size() * 3) / 4) >>>             return (1); >>>     #endif    /* sun */ >>>    =20 >>>         if (spa_get_random(100) =3D=3D 0) >>>             return (1); >>>     #endif >>> >>>  --=20 >>>  -- Karl >>>  karl@denninger.net [5] >>> >>>  --------------ms010508000607000909070805 >>>  Content-Type: application/pkcs7-signature; name="smime.p7s" >>>  Content-Transfer-Encoding: base64 >>>  Content-Disposition: attachment; filename="smime.p7s" >>>  Content-Description: S/MIME Cryptographic Signature >>> >>>   >>> >> > > MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIFTzCC >>> >>>   >>> >> > > BUswggQzoAMCAQICAQgwDQYJKoZIhvcNAQEFBQAwgZ0xCzAJBgNVBAYTAlVTMRAwDgYDVQQI >>> >>>   >>> >> > > EwdGbG9yaWRhMRIwEAYDVQQHEwlOaWNldmlsbGUxGTAXBgNVBAoTEEN1ZGEgU3lzdGVtcyBM >>> >>>   >>> >> > > TEMxHDAaBgNVBAMTE0N1ZGEgU3lzdGVtcyBMTEMgQ0ExLzAtBgkqhkiG9w0BCQEWIGN1c3Rv >>> >>>   >>> >> > > bWVyLXNlcnZpY2VAY3VkYXN5c3RlbXMubmV0MB4XDTEzMDgyNDE5MDM0NFoXDTE4MDgyMzE5 >>> >>>   >>> >> > > MDM0NFowWzELMAkGA1UEBhMCVVMxEDAOBgNVBAgTB0Zsb3JpZGExFzAVBgNVBAMTDkthcmwg >>> >>>   >>> >> > > RGVubmluZ2VyMSEwHwYJKoZIhvcNAQkBFhJrYXJsQGRlbm5pbmdlci5uZXQwggIiMA0GCSqG >>> >>>   >>> >> > > SIb3DQEBAQUAA4ICDwAwggIKAoICAQC5n2KBrBmG22nVntVdvgKCB9UcnapNThrW1L+dq6th >>> >>>   >>> >> > > d9l4mj+qYMUpJ+8I0rTbY1dn21IXQBoBQmy8t1doKwmTdQ59F0FwZEPt/fGbRgBKVt3Quf6W >>> >>>   >>> >> > > 6n7kRk9MG6gdD7V9vPpFV41e+5MWYtqGWY3ScDP8SyYLjL/Xgr+5KFKkDfuubK8DeNqdLniV >>> >>>   >>> >> > > jHo/vqmIgO+6NgzPGPgmbutzFQXlxUqjiNAAKzF2+Tkddi+WKABrcc/EqnBb0X8GdqcIamO5 >>> >>>   >>> >> > > SyVmuM+7Zdns7D9pcV16zMMQ8LfNFQCDvbCuuQKMDg2F22x5ekYXpwjqTyfjcHBkWC8vFNoY >>> >>>   >>> >> > > 5aFMdyiN/Kkz0/kduP2ekYOgkRqcShfLEcG9SQ4LQZgqjMpTjSOGzBr3tOvVn5LkSJSHW2Z8 >>> >>>   >>> >> > > Q0dxSkvFG2/lsOWFbwQeeZSaBi5vRZCYCOf5tRd1+E93FyQfpt4vsrXshIAk7IK7f0qXvxP4 >>> >>>   >>> >> > > GDli5PKIEubD2Bn+gp3vB/DkfKySh5NBHVB+OPCoXRUWBkQxme65wBO02OZZt0k8Iq0i4Rci >>> >>>   >>> >> > > WV6z+lQHqDKtaVGgMsHn6PoeYhjf5Al5SP+U3imTjF2aCca1iDB5JOccX04MNljvifXgcbJN >>> >>>   >>> >> > > nkMgrzmm1ZgJ1PLur/ADWPlnz45quOhHg1TfUCLfI/DzgG7Z6u+oy4siQuFr9QT0MQIDAQAB >>> >>>   >>> >> > > o4HWMIHTMAkGA1UdEwQCMAAwEQYJYIZIAYb4QgEBBAQDAgWgMAsGA1UdDwQEAwIF4DAsBglg >>> >>>   >>> >> > > hkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwHQYDVR0OBBYEFHw4 >>> >>>   >>> >> > > +LnuALyLA5Cgy7T5ZAX1WzKPMB8GA1UdIwQYMBaAFF3U3hpBZq40HB5VM7B44/gmXiI0MDgG >>> >>>   >>> >> > > 
>>>
>>> _______________________________________________
>>> freebsd-fs@freebsd.org [6] mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs [7]
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" [8]
>
> --
> Karl Denninger
> karl@denninger.net [9]
> _The Market Ticker_
>
> Links:
> ------
> [1] mailto:karl@denninger.net
> [2] mailto:bug-followup@FreeBSD.org
> [3] mailto:karl@fs.denninger.net
> [4] mailto:karl@denninger.net
> [5] mailto:karl@denninger.net
> [6] mailto:freebsd-fs@freebsd.org
> [7] http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> [8] mailto:freebsd-fs-unsubscribe@freebsd.org
> [9] mailto:karl@denninger.net
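For anyone skimming the patch above, the heart of it is the single comparison
at the end.  A minimal standalone paraphrase of that test follows; the page
counts below are made-up illustrative numbers (the real code reads
cnt.v_page_count and cnt.v_free_count from the kernel VM system), not
measurements from any machine in this thread.

    #include <stdio.h>

    /* Sketch of the patch's reclaim test with hypothetical numbers:
     * reclaim when free pages drop below the base reservation plus
     * percent_target percent of total RAM. */
    int
    main(void)
    {
        unsigned v_page_count = 4000000;  /* hypothetical: ~16 GB of 4 KB pages */
        unsigned v_free_count = 80000;    /* hypothetical current free pages */
        unsigned freepages = 84000;       /* ~ v_free_target less 3% */
        unsigned percent_target = 0;      /* default: no extra reservation */

        if (v_free_count < freepages + (v_page_count / 100) * percent_target)
            printf("RECLAIM: ask ARC to shrink\n");
        else
            printf("NORMAL: leave ARC alone\n");
        return (0);
    }

With these numbers the test fires (80000 < 84000), so the ARC would be asked
to pare back before the pager has to suspend processes.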
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 15:22:00 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 9AD0F905
 for ; Mon, 24 Mar 2014 15:22:00 +0000 (UTC)
Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id 2D84F1DB
 for ; Mon, 24 Mar 2014 15:21:59 +0000 (UTC)
Received: from [127.0.0.1] (localhost [127.0.0.1])
 by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2OFLsof086897
 for ; Mon, 24 Mar 2014 10:21:54 -0500 (CDT) (envelope-from karl@denninger.net)
Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH);
 Mon Mar 24 10:21:54 2014
Message-ID: <53304D8D.7080107@denninger.net>
Date: Mon, 24 Mar 2014 10:21:49 -0500
From: Karl Denninger
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0
MIME-Version: 1.0
To: mikej , freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
 [SB QUAR: Fri Mar 21 16:01:12 2014]
References: <201403201710.s2KHA0e9043051@freefall.freebsd.org>
 <31e2f5092048c128add6f4cd3d136f4d@mail.mikej.com> <532CD3BC.5020305@denninger.net>
In-Reply-To:
Content-Type: multipart/signed; protocol="application/pkcs7-signature";
 micalg=sha1; boundary="------------ms010404080205080205080700"
X-Antivirus: avast! (VPS 140324-0, 03/24/2014), Outbound message
X-Antivirus-Status: Clean
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 24 Mar 2014 15:22:00 -0000

This is a cryptographically signed message in MIME format.

--------------ms010404080205080205080700
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Mike;

Did the patch apply cleanly?

That declaration is in <sys/vmmeter.h>, which should be included up near
the top of the file if NEWRECLAIM is defined and the patch applied ok.

See here, last entry (that's the most-recent rev), revert to the stock
arc.c (or put arc.c.orig back, which should be the original file) and
re-apply it.

http://www.freebsd.org/cgi/query-pr.cgi?pr=187594

Lastly, what OS rev are you running?  The patch is against 10-STABLE; it
is ok against both the current checked-in rev of arc.c and the previous
(prior to the new feature flags being added a week or so ago) rev back.

It sounds like the include didn't get applied.

On 3/24/2014 10:00 AM, mikej wrote:
> Karl,
>
> Not being a C coder, it appears a declaration is missing.
>
> --- arc.o ---
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2614:15:
> error: use of undeclared identifier 'cnt'
>             freepages = cnt.v_free_target - (cnt.v_free_target / 33);
>
> Thanks again,
> Michael Jung
>
>
> On 2014-03-21 20:05, Karl Denninger wrote:
>> Here 'ya go...
>>
>> Please keep me posted (the list is best, as the more public commentary
>> the better, and if this needs more tuning that's the way to find out!)
>> on how it works for you.
>>
>> I have it in production at this point and am happy with it -- the
>> current default is at the pager "wakeup" level less 3%, but it of
>> course can be tuned manually.
>>
>> On 3/21/2014 3:59 PM, mikej wrote:
>>
>>> Karl,
>>>
>>> I've looked at my raw mailbox and something is trashing tabs and line
>>> length for your more recent patches in email.
>>>
>>> I did not see any attachments, nor updates to the PR for download -
>>> would you mind sending me the latest patch as an attachment?
>>>
>>> Thanks for your work, I believe this is going to add real stability
>>> without having to set vfs.zfs.arc_max and other tunables.
>>>
>>> Kind regards,
>>>
>>> Michael Jung
>>>
>>> On 2014-03-20 13:10, Karl Denninger wrote:
>>>
>>>> The following reply was made to PR kern/187594; it has been noted
>>>> by GNATS.
>>>>
>>>> From: Karl Denninger [1]
>>>> To: bug-followup@FreeBSD.org [2], karl@fs.denninger.net [3]
>>>> Cc:
>>>> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
>>>> Date: Thu, 20 Mar 2014 12:00:54 -0500
>>>>
>>>> This is a cryptographically signed message in MIME format.
>>>>
>>>> --------------ms010508000607000909070805
>>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>> Content-Transfer-Encoding: quoted-printable
>>>>
>>>> Responsive to avg's comment, and with another overnight and daytime load
>>>> of testing on multiple machines with varying memory configs from 4-24GB
>>>> of RAM, here is another version of the patch.
>>>>
>>>> The differences are:
>>>>
>>>> 1. No longer use kernel_sysctlbyname; include the VM header file and get
>>>> the values directly (less overhead.)  Remove the variables no longer needed.
>>>>
>>>> 2. Set the default free RAM level for ARC shrinkage to v_free_target
>>>> less 3%, as I was able to provoke a stall once with it set to a 5%
>>>> reservation, was able to provoke it with the parameter set to 10% with a
>>>> lot of work, and was able to do so "on demand" with it set to 20%.  With
>>>> a 5% invasion, initiating a scrub with very heavy I/O and image load
>>>> (hundreds of web and database processes) provoked a ~10 second system
>>>> stall.  With it set to 3% I have not been able to reproduce the stall,
>>>> yet the inactive page count remains stable even under extremely heavy
>>>> load, indicating that page-stealing remains effective when required.
>>>> Note that for my workload, even with this level set above v_free_target,
>>>> which would imply no page stealing by the VM system before ARC expansion
>>>> is halted, I do not get unbridled inactive page growth.
>>>>
>>>> As before, vfs.zfs.arc_freepages and vfs.zfs.arc_freepage_percent remain
>>>> as accessible knobs if you wish to twist them for some reason to
>>>> compensate for an unusual load profile or machine configuration.
>>>>
>>>> *** arc.c.original    Thu Mar 13 09:18:48 2014
>>>> --- arc.c    Thu Mar 20 11:51:48 2014
>>>> ***************
>>>> *** 18,23 ****
>>>> --- 18,94 ----
>>>>      *
>>>>      * CDDL HEADER END
>>>>      */
>>>>   +
>>>>   + /* Karl Denninger (karl@denninger.net [4]), 3/20/2014, FreeBSD-specific
>>>>   +  *
>>>>   +  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
>>>>   +  * the ARC cache to be pared down.  The reason for the change is that the
>>>>   +  * apparent attempted algorithm is to start evicting ARC cache when free
>>>>   +  * pages fall below 25% of installed RAM.  This maps reasonably well to how
>>>>   +  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
>>>>   +  * to pare down.
>>>>   +  *
>>>>   +  * The problem is that on FreeBSD machines the system doesn't appear to be
>>>>   +  * getting what the authors of the original code thought they were looking at
>>>>   +  * with its test -- or at least not what Solaris did -- and as a result that
>>>>   +  * test never triggers.  That leaves the only reclaim trigger as the "paging
>>>>   +  * needed" status flag, and by the time that trips the system is already
>>>>   +  * in low-memory trouble.  This can lead to severe pathological behavior
>>>>   +  * under the following scenario:
>>>>   +  * - The system starts to page and ARC is evicted.
>>>>   +  * - The system stops paging as ARC's eviction drops wired RAM a bit.
>>>>   +  * - ARC starts increasing its allocation again, and wired memory grows.
>>>>   +  * - A new image is activated, and the system once again attempts to page.
>>>>   +  * - ARC starts to be evicted again.
>>>>   +  * - Back to #2
>>>>   +  *
>>>>   +  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
>>>>   +  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
>>>>   +  * else needs it.  That would be ok if we evicted cache when required.
>>>>   +  *
>>>>   +  * Unfortunately the system can get into a state where it never
>>>>   +  * manages to page anything of materiality back in, as if there is active
>>>>   +  * I/O the ARC will start grabbing space once again as soon as the memory
>>>>   +  * contention state drops.  For this reason the "paging is occurring" flag
>>>>   +  * should be the **last resort** condition for ARC eviction; you want to
>>>>   +  * (as Solaris does) start when there is material free RAM left BUT the
>>>>   +  * vm system thinks it needs to be active to steal pages back in the attempt
>>>>   +  * to never get into the condition where you're potentially paging off
>>>>   +  * executables in favor of leaving disk cache allocated.
>>>>   +  *
>>>>   +  * To fix this we change how we look at low memory, declaring two new
>>>>   +  * runtime tunables.
>>>>   +  *
>>>>   +  * The new sysctls are:
>>>>   +  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
>>>>   +  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
>>>>   +  *
>>>>   +  * vfs.zfs.arc_freepages is initialized from vm.v_free_target, less 3%.
>>>>   +  * This should insure that we allow the VM system to steal pages first,
>>>>   +  * but pare the cache before we suspend processes attempting to get more
>>>>   +  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
>>>>   +  * or force a specific percentage reservation as well, but doing so may
>>>>   +  * cause the cache to pare back while the VM system remains willing to
>>>>   +  * allow "inactive" pages to accumulate.  The challenge is that image
>>>>   +  * activation can force things into the page space on a repeated basis
>>>>   +  * if you allow this level to be too small (the above pathological
>>>>   +  * behavior); the defaults should avoid that behavior but the sysctls
>>>>   +  * are exposed should your workload require adjustment.
>>>>   +  *
>>>>   +  * If we're using this check for low memory we are replacing the previous
>>>>   +  * ones, including the oddball "random" reclaim that appears to fire far
>>>>   +  * more often than it should.  We still trigger if the system pages.
>>>>   +  *
>>>>   +  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
>>>>   +  * status messages when the reclaim status trips on and off, along with the
>>>>   +  * page count aggregate that triggered it (and the free space) for each
>>>>   +  * event.
>>>>   +  */
>>>>   +
>>>>   + #define    NEWRECLAIM
>>>>   + #undef     NEWRECLAIM_DEBUG
>>>>   +
>>>>   +
>>>>     /*
>>>>      * Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.
>>>>      * Copyright (c) 2013 by Delphix.  All rights reserved.
>>>> ***************
>>>> *** 139,144 ****
>>>> --- 210,222 ----
>>>>
>>>>     #include
>>>>
>>>>   + #ifdef    NEWRECLAIM
>>>>   + #ifdef    __FreeBSD__
>>>>   + #include
>>>>   + #include
>>>>   + #endif
>>>>   + #endif    /* NEWRECLAIM */
>>>>   +
>>>>     #ifdef illumos
>>>>     #ifndef _KERNEL
>>>>     /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
>>>> ***************
>>>> *** 203,218 ****
>>>> --- 281,316 ----
>>>>     int zfs_arc_shrink_shift = 0;
>>>>     int zfs_arc_p_min_shift = 0;
>>>>     int zfs_disable_dup_eviction = 0;
>>>>   + #ifdef NEWRECLAIM
>>>>   + #ifdef __FreeBSD__
>>>>   + static int freepages = 0;    /* This much memory is considered critical */
>>>>   + static int percent_target = 0;    /* Additionally reserve "X" percent free RAM */
>>>>   + #endif    /* __FreeBSD__ */
>>>>   + #endif    /* NEWRECLAIM */
>>>>
>>>>     TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
>>>>     TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
>>>>     TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
>>>>   + #ifdef NEWRECLAIM
>>>>   + #ifdef __FreeBSD__
>>>>   + TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
>>>>   + TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
>>>>   + #endif    /* __FreeBSD__ */
>>>>   + #endif    /* NEWRECLAIM */
>>>>   +
>>>>     SYSCTL_DECL(_vfs_zfs);
>>>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0,
>>>>         "Maximum ARC size");
>>>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0,
>>>>         "Minimum ARC size");
>>>>
>>>>   + #ifdef NEWRECLAIM
>>>>   + #ifdef __FreeBSD__
>>>>   + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
>>>>   + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
>>>>   + #endif    /* __FreeBSD__ */
>>>>   + #endif    /* NEWRECLAIM */
>>>>   +
>>>>     /*
>>>>      * Note that buffers can be in one of 6 states:
>>>>      * ARC_anon - anonymous (discussed below)
>>>> ***************
>>>> *** 2438,2443 ****
>>>> --- 2536,2546 ----
>>>>     {
>>>>
>>>>     #ifdef _KERNEL
>>>>   + #ifdef NEWRECLAIM_DEBUG
>>>>   + static int xval = -1;
>>>>   + static int oldpercent = 0;
>>>>   + static int oldfreepages = 0;
>>>>   + #endif    /* NEWRECLAIM_DEBUG */
>>>>
>>>>         if (needfree)
>>>>             return (1);
>>>> ***************
>>>> *** 2476,2481 ****
>>>> --- 2579,2585 ----
>>>>             return (1);
>>>>
>>>>     #if defined(__i386)
>>>>   +
>>>>         /*
>>>>          * If we're on an i386 platform, it's possible that we'll exhaust the
>>>>          * kernel heap space before we ever run out of available physical
>>>> ***************
>>>> *** 2492,2502 ****
>>>>             return (1);
>>>>     #endif
>>>>     #else    /* !sun */
>>>>         if (kmem_used() > (kmem_size() * 3) / 4)
>>>>             return (1);
>>>>     #endif    /* sun */
>>>>
>>>>   - #else
>>>>         if (spa_get_random(100) == 0)
>>>>             return (1);
>>>>     #endif
>>>> --- 2596,2658 ----
>>>>             return (1);
>>>>     #endif
>>>>     #else    /* !sun */
>>>>   +
>>>>   + #ifdef    NEWRECLAIM
>>>>   + #ifdef    __FreeBSD__
>>>>   + /*
>>>>   +  * Implement the new tunable free RAM algorithm.  We check the free pages
>>>>   +  * against the minimum specified target and the percentage that should be
>>>>   +  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
>>>>   +  * on a FreeBSD system the older checks are not performed.
>>>>   +  *
>>>>   +  * Check first to see if we need to init freepages, then test.
>>>>   +  */
>>>>   +     if (!freepages) {        /* If zero then (re)init */
>>>>   +         freepages = cnt.v_free_target - (cnt.v_free_target / 33);
>>>>   + #ifdef    NEWRECLAIM_DEBUG
>>>>   +         printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u] [%u less 3%%]\n", freepages, cnt.v_free_target);
>>>>   + #endif    /* NEWRECLAIM_DEBUG */
>>>>   +     }
>>>>   + #ifdef    NEWRECLAIM_DEBUG
>>>>   +     if (percent_target != oldpercent) {
>>>>   +         printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, cnt.v_page_count, cnt.v_free_count);
>>>>   +         oldpercent = percent_target;
>>>>   +     }
>>>>   +     if (freepages != oldfreepages) {
>>>>   +         printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, cnt.v_page_count, cnt.v_free_count);
>>>>   +         oldfreepages = freepages;
>>>>   +     }
>>>>   + #endif    /* NEWRECLAIM_DEBUG */
>>>>   + /*
>>>>   +  * Now figure out how much free RAM we require to call the ARC cache status
>>>>   +  * "ok".  Add the percentage specified of the total to the base requirement.
>>>>   +  */
>>>>   +
>>>>   +     if (cnt.v_free_count < freepages + ((cnt.v_page_count / 100) * percent_target)) {
>>>>   + #ifdef    NEWRECLAIM_DEBUG
>>>>   +         if (xval != 1) {
>>>>   +             printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
>>>>   +             xval = 1;
>>>>   +         }
>>>>   + #endif    /* NEWRECLAIM_DEBUG */
>>>>   +         return(1);
>>>>   +     } else {
>>>>   + #ifdef    NEWRECLAIM_DEBUG
>>>>   +         if (xval != 0) {
>>>>   +             printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
>>>>   +             xval = 0;
>>>>   +         }
>>>>   + #endif    /* NEWRECLAIM_DEBUG */
>>>>   +         return(0);
>>>>   +     }
>>>>   +
>>>>   + #endif    /* __FreeBSD__ */
>>>>   + #endif    /* NEWRECLAIM */
>>>>   +
>>>>         if (kmem_used() > (kmem_size() * 3) / 4)
>>>>             return (1);
>>>>     #endif    /* sun */
>>>>
>>>>         if (spa_get_random(100) == 0)
>>>>             return (1);
>>>>     #endif
>>>>
>>>> --
>>>> -- Karl
>>>> karl@denninger.net [5]
>>>>
>>>> _______________________________________________
>>>> freebsd-fs@freebsd.org [6] mailing list
>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs [7]
>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" [8]
>>
>> --
>> Karl Denninger
>> karl@denninger.net [9]
>> _The Market Ticker_
>>
>> Links:
>> ------
>> [1] mailto:karl@denninger.net
>> [2] mailto:bug-followup@FreeBSD.org
>> [3] mailto:karl@fs.denninger.net
>> [4] mailto:karl@denninger.net
>> [5] mailto:karl@denninger.net
>> [6] mailto:freebsd-fs@freebsd.org
>> [7] http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> [8] mailto:freebsd-fs-unsubscribe@freebsd.org
>> [9] mailto:karl@denninger.net
>
>
> %SPAMBLOCK-SYS: Matched [+mikej ], message ok
>

--
-- Karl
karl@denninger.net
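A note for readers following the build error in this exchange: the cnt symbol
the patch reads is the kernel's global struct vmmeter, which is why the header
has to make it into arc.c for the hunk to compile.  A tiny userland sketch of
the dependency follows; the struct here is a stand-in stub and the value is
hypothetical, since outside the kernel the real declaration (which lives in
<sys/vmmeter.h>) is not available.

    #include <stdio.h>

    /* Stand-in for the kernel's vmmeter; in arc.c the real "cnt" comes
     * from <sys/vmmeter.h>, and omitting that include produces exactly
     * the reported "use of undeclared identifier 'cnt'" error. */
    struct vmmeter_stub {
        unsigned v_free_target;
    };
    static struct vmmeter_stub cnt = { 87000 };   /* hypothetical page count */

    int
    main(void)
    {
        unsigned freepages = cnt.v_free_target - (cnt.v_free_target / 33);
        printf("default vfs.zfs.arc_freepages = %u\n", freepages);
        return (0);
    }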
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 18:34:59 2014
Return-Path:
Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 5D54E1A1;
 Mon, 24 Mar 2014 18:34:59 +0000 (UTC)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id 310A7A46;
 Mon, 24 Mar 2014 18:34:59 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1])
 by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OIYxBZ058203;
 Mon, 24 Mar 2014 18:34:59 GMT (envelope-from linimon@freefall.freebsd.org)
Received: (from linimon@localhost)
 by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OIYxqS058202;
 Mon, 24 Mar 2014 18:34:59 GMT (envelope-from linimon)
Date: Mon, 24 Mar 2014 18:34:59 GMT
Message-Id: <201403241834.s2OIYxqS058202@freefall.freebsd.org>
To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
From: linimon@FreeBSD.org
Subject: Re: kern/187905: [zpool] Confusion zpool with a block size in HDD -
 block size: 512B configured, 4096B native
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 24 Mar 2014 18:34:59 -0000

Old Synopsis: Confusion zpool with a block size in HDD - block size: 512B configured, 4096B native
New Synopsis: [zpool] Confusion zpool with a block size in HDD - block size: 512B configured, 4096B native

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon Mar 24 18:34:42 UTC 2014
Responsible-Changed-Why:
Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=187905

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 19:20:03 2014
Return-Path:
Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 42B24D7E
 for ; Mon, 24 Mar 2014 19:20:03 +0000 (UTC)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id 2FD76DF5
 for ; Mon, 24 Mar 2014 19:20:03 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1])
 by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OJK2aW071159
 for ; Mon, 24 Mar 2014 19:20:02 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost)
 by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OJK2lN071158;
 Mon, 24 Mar 2014 19:20:02 GMT (envelope-from gnats)
Date: Mon, 24 Mar 2014 19:20:02 GMT
Message-Id: <201403241920.s2OJK2lN071158@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
Cc:
From: "Steven Hartland"
Subject: Re: kern/187905: [zpool] Confusion zpool with a block size in HDD -
 block size: 512B configured, 4096B native
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
Reply-To: Steven Hartland
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 24 Mar 2014 19:20:03 -0000

The following reply was made to PR kern/187905; it has been noted by GNATS.

From: "Steven Hartland"
To: ,
Cc:
Subject: Re: kern/187905: [zpool] Confusion zpool with a block size in HDD -
 block size: 512B configured, 4096B native
Date: Mon, 24 Mar 2014 19:09:48 -0000

 This matches the 4K quirk for Seagate Barracuda Green Advanced Format (4k)
 drives:

     {
         /* Seagate Barracuda Green Advanced Format (4k) drives */
         { T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST???DM*", "*" },
         /*quirks*/DA_Q_4K
     },
 ---
     {
         /* Seagate Barracuda Advanced Format (4k) drives */
         { T_DIRECT, SIP_MEDIA_FIXED, "*", "ST????DM*", "*" },
         /*quirks*/ADA_Q_4K
     },

 This is possibly incorrect for this drive, as it looks like it's a Seagate
 version of the old Samsung drive.
 Could you provide the information from:-
     camcontrol identify ada0

 Regards
     Steve

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 19:30:01 2014
Return-Path:
Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 49EE3301
 for ; Mon, 24 Mar 2014 19:30:01 +0000 (UTC)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id 3639AEF6
 for ; Mon, 24 Mar 2014 19:30:01 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1])
 by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s2OJU1DU074018
 for ; Mon, 24 Mar 2014 19:30:01 GMT (envelope-from gnats@freefall.freebsd.org)
Received: (from gnats@localhost)
 by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s2OJU1uE074017;
 Mon, 24 Mar 2014 19:30:01 GMT (envelope-from gnats)
Date: Mon, 24 Mar 2014 19:30:01 GMT
Message-Id: <201403241930.s2OJU1uE074017@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
Cc:
From: "Vladislav V. Prodan"
Subject: Re: kern/187905: [zpool] Confusion zpool with a block size in HDD -
 block size: 512B configured, 4096B native
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
Reply-To: "Vladislav V. Prodan"
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 24 Mar 2014 19:30:01 -0000

The following reply was made to PR kern/187905; it has been noted by GNATS.

From: "Vladislav V. Prodan"
To: Steven Hartland
Cc: bug-followup@freebsd.org
Subject: Re: kern/187905: [zpool] Confusion zpool with a block size in HDD -
 block size: 512B configured, 4096B native
Date: Mon, 24 Mar 2014 21:27:28 +0200

 2014-03-24 21:09 GMT+02:00 Steven Hartland :

 > This matches the 4K quirk for Seagate Barracuda Green Advanced Format (4k)
 > drives:
 >     {
 >         /* Seagate Barracuda Green Advanced Format (4k) drives */
 >         { T_DIRECT, SIP_MEDIA_FIXED, "ATA", "ST???DM*", "*" },
 >         /*quirks*/DA_Q_4K
 >     },
 > ---
 >     {
 >         /* Seagate Barracuda Advanced Format (4k) drives */
 >         { T_DIRECT, SIP_MEDIA_FIXED, "*", "ST????DM*", "*" },
 >         /*quirks*/ADA_Q_4K
 >     },
 >
 > This is possibly incorrect for this drive, as it looks like it's a Seagate
 > version of the old Samsung drive.
 >
 > Could you provide the information from:-
 >     camcontrol identify ada0

 Thanks for the reply.
 Here's the output:

 # camcontrol identify ada0
 pass0: <ST500DM005 HD502HJ 1AJ10001> ATA-8 SATA 2.x device
 pass0: 150.000MB/s transfers (SATA, UDMA5, PIO 8192bytes)

 protocol              ATA/ATAPI-8 SATA 2.x
 device model          ST500DM005 HD502HJ
 firmware revision     1AJ10001
 serial number         S20BJ90D306953
 WWN                   50004cf209d3bdd6
 cylinders             16383
 heads                 16
 sectors/track         63
 sector size           logical 512, physical 512, offset 0
 LBA supported         268435455 sectors
 LBA48 supported       976773168 sectors
 PIO supported         PIO4
 DMA supported         WDMA2 UDMA6
 media RPM             7200

 Feature                        Support  Enabled   Value           Vendor
 read ahead                     yes      yes
 write cache                    yes      yes
 flush cache                    yes      yes
 overlap                        no
 Tagged Command Queuing (TCQ)   no       no
 Native Command Queuing (NCQ)   yes               32 tags
 SMART                          yes      yes
 microcode download             yes      yes
 security                       yes      no
 power management               yes      yes
 advanced power management      yes      no       0/0x00
 automatic acoustic management  yes      yes      254/0xFE        254/0xFE
 media status notification      no       no
 power-up in Standby            yes      no
 write-read-verify              no       no
 unload                         no       no
 free-fall                      no       no
 Data Set Management (DSM/TRIM) no
 Host Protected Area (HPA)      yes      no       976773168/976773168
 HPA - Security                 no

 --
 Vladislav V. Prodan
 System & Network Administrator
 http://support.od.ua
 +380 67 4584408, +380 99 4060508
 VVP88-RIPE
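For reference on the quirk question in this thread: the quirk-table patterns
are simple globs where ? matches exactly one character and * matches any run.
A quick userland sketch follows, using fnmatch(3) as a stand-in matcher (an
assumption for illustration only; CAM uses its own matching routine, not
fnmatch) to show how the reported model string fares against the two patterns
Steven quoted.

    #include <stdio.h>
    #include <fnmatch.h>

    /* Check the identify-reported model string against both quirk globs.
     * fnmatch(3) here stands in for CAM's matcher (an assumption). */
    int
    main(void)
    {
        const char *model = "ST500DM005 HD502HJ";  /* from the output above */

        printf("ST???DM*  -> %s\n",
            fnmatch("ST???DM*", model, 0) == 0 ? "match" : "no match");
        printf("ST????DM* -> %s\n",
            fnmatch("ST????DM*", model, 0) == 0 ? "match" : "no match");
        return (0);
    }

Under these glob semantics the three-? pattern matches ST500DM005 while the
four-? one does not, which lines up with Steven's observation that a drive
reporting 512-byte physical sectors can still trip a 4K quirk entry.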
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 20:12:53 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 3725C759
 for ; Mon, 24 Mar 2014 20:12:53 +0000 (UTC)
Received: from mail-lb0-x22a.google.com (mail-lb0-x22a.google.com [IPv6:2a00:1450:4010:c04::22a])
 (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id 9FC673E3
 for ; Mon, 24 Mar 2014 20:12:52 +0000 (UTC)
Received: by mail-lb0-f170.google.com with SMTP id s7so4049714lbd.15
 for ; Mon, 24 Mar 2014 13:12:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
 h=mime-version:in-reply-to:references:date:message-id:subject:from:to:content-type;
 bh=K25Q2XzqBzlzpOomKOegLayCzqOoJJGVhQr/T08ROnw=;
 b=Lny8sMh2Aak5NK/9AQWh9JD770Q5nBRDg9dTmiEfQSSE2QRKDQlYTdoWMgctz2k7bO
 ihiyBVYcyB1jNxO8irWT93f7baGCsT+DBzKPdte3enF7Q3ZXWg5VLHiJjrwLrhoC4Re6
 zlDtQWu3O1tINBheZo+tVC3NIE94YaGo5RZYHW3M9YgwVErM6wt7+jOo1c2+IiZ/4fc3
 CQ6GFZ/sDd/unuuvcnINdRnXg+l7huE2xx7M2++cr9kFyX1vwwTrGVWEI6uCbBI60cFK
 oGNa9Y4aCLhoiFNzsyWXf/6Sh4BPOe5tSuhd3eI+alPjtfZTg87UVQJiCetkJirphrj7
 6qtg==
MIME-Version: 1.0
X-Received: by 10.112.24.172 with SMTP id v12mr51225lbf.74.1395691970802;
 Mon, 24 Mar 2014 13:12:50 -0700 (PDT)
Received: by 10.114.76.81 with HTTP; Mon, 24 Mar 2014 13:12:50 -0700 (PDT)
In-Reply-To:
References:
Date: Mon, 24 Mar 2014 13:12:50 -0700
Message-ID:
Subject: Re: rsync w/ fake-super -> crashes zfs
From: javocado
To: FreeBSD Filesystems
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
X-Content-Filtered-By: Mailman/MimeDel 2.1.17
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 24 Mar 2014 20:12:53 -0000

A bit more info:

We've been able to reproduce the crash with panic output:

Fatal trap 12: page fault while in kernel mode
cpuid = 11; apic id = 15
fault virtual address   = 0x160
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80abb546
stack pointer           = 0x28:0xffffffa59b580910
frame pointer           = 0x28:0xffffffa59b5809d0
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 93937 (rsync)
[thread pid 93937 tid 102776 ]
Stopped at      zfs_freebsd_remove+0x426:       movq    0x160(%rax),%rsi
db>

by simply adding --delete to this command:

rsync -av --rsync-path="rsync --fake-super" /src /dst

The file it was deleting at the time of the crash had valid (readable)
extattrs (rsync.%stat - for the fake-super info) and nothing else special
for acl:

owner@:rw-p--aARWcCos:------:allow
group@:r-----a-R-c--s:------:allow
everyone@:r-----a-R-c--s:------:allow

Question: should it be safe to remove that file traditionally, as in:

rm /file/that/zfs+rsync+fake_super/croaked/on

How would that rm differ from what rsync was trying to do?


On Thu, Mar 20, 2014 at 5:33 PM, javocado wrote:

> Gathering more info here...
>
> When re-running the rsync command, stripped down:
>
> rsync -av --rsync-path="rsync --fake-super" /src /dst
>
> we're not seeing any system crashes, yet, but rsync is choking/quitting
> quite a bit, saying a file here or there has a corrupt extattr. Does this
> point to a problem with ZFS or more with rsync?
>
> Any thoughts on the fake-super impact and how it leads to crashing?
>
>
> On Fri, Mar 14, 2014 at 4:57 PM, javocado wrote:
>
>> System specifics:
>> ZFS version 28
>> FreeBSD 8.3-RELEASE
>>
>> We're seeing a repeatable outcome where a remote rsync command like:
>>
>> rsync -axzHAXS --rsync-path="rsync --fake-super" --exclude '*/rsync.%stat'
>>
>> backing up to our zfs filesystem (with 15M inodes) will lead to a panic
>> with output like:
>>
>> Fatal trap 12: page fault while in kernel mode
>> cpuid = 4; apic id = 04
>> fault virtual address   = 0x160
>> fault code              = supervisor read data, page not present
>> instruction pointer     = 0x20:0xffffffff80abb546
>> stack pointer           = 0x28:0xffffff976c62b910
>> frame pointer           = 0x28:0xffffff976c62b9d0
>> code segment            = base 0x0, limit 0xfffff, type 0x1b
>>                         = DPL 0, pres 1, long 1, def32 0, gran 1
>> processor eflags        = interrupt enabled, resume, IOPL = 0
>> current process         = 7295 (rsync)
>> [thread pid 7295 tid 101008 ]
>> Stopped at      zfs_freebsd_remove+0x426:       movq    0x160(%rax),%rsi
>>
>> On the sending side (RHEL, ext3), rsync reports errors like:
>>
>> rsync: failed to read xattr rsync.%stat
>> rsync: failed to write xattr rsync.%stat
>> rsync: get_xattr_names: llistxattr
>>
>> which we've seen occasionally with other systems when running rsync with
>> fake-super, but it usually doesn't lead to a crash.*
>>
>> On the receiving side, other than the crashes, we are seeing a few new
>> files (that don't exist on the source) named:
>>
>> rsync.%stat
>>
>> which correspond to and contain the owner and permission attributes that
>> should have been stored in the extattrs for the file or directory. Not
>> sure if they are a red herring, but they're usually not something we see
>> (perhaps that's related to the --exclude '*/rsync.%stat' and rsync not
>> being able to clean up properly).
>>
>> We are still testing to see if any options in the rsync command (above)
>> may be contributing to the crash, since fake-super in and of itself runs
>> fine under basic (rsync -av --rsync-path="rsync --fake-super" /src /dst)
>> circumstances. But we suspect that the problem is related to fake-super
>> and its reliance upon extattrs.
>>
>> What we really need is a solution to the crashing - some way to make ZFS
>> stop choking on whatever --fake-super produces and/or how it's interacting
>> with extattrs on ZFS.
>>
>> Thanks!
>>
>> * we sometimes also see on the sending side w/ fake-super:
>> rsync: failed to write xattr rsync.%stat for "xxxxxx/file" : No such file
>> or directory (2)
>>
>> when (1) the file exists, (2) it's a symlink,
>> but that isn't happening in this instance. We only mention it here as
>> another oddity of fake-super + ZFS + extattr.
>>
>

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 24 23:02:22 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 3FB03497
 for ; Mon, 24 Mar 2014 23:02:22 +0000 (UTC)
Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171])
 (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id 864F47BC
 for ; Mon, 24 Mar 2014 23:02:21 +0000 (UTC)
Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44])
 by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s2ON2BWV040718
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
 Mon, 24 Mar 2014 19:02:12 -0400 (EDT) (envelope-from mikej@mikej.com)
Received: from firewall.mikej.com (localhost.mikej.com [127.0.0.1])
 by firewall.mikej.com (8.14.8/8.14.8) with ESMTP id s2ON1Ucq066361
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO);
 Mon, 24 Mar 2014 19:02:11 -0400 (EDT) (envelope-from mikej@mikej.com)
Received: (from www@localhost)
 by firewall.mikej.com (8.14.8/8.14.8/Submit) id s2ON1Tg4066349;
 Mon, 24 Mar 2014 19:01:29 -0400 (EDT) (envelope-from mikej@mikej.com)
X-Authentication-Warning: firewall.mikej.com: www set sender to mikej@mikej.com using -f
To: Karl Denninger ,
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
 [SB QUAR: Fri Mar 21 16:01:12 2014]
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Date: Mon, 24 Mar 2014 19:00:59 -0400
From: mikej
In-Reply-To: <53304D8D.7080107@denninger.net>
References: <201403201710.s2KHA0e9043051@freefall.freebsd.org>
 <31e2f5092048c128add6f4cd3d136f4d@mail.mikej.com> <532CD3BC.5020305@denninger.net>
 <53304D8D.7080107@denninger.net>
Message-ID:
X-Sender: mikej@mikej.com
User-Agent: Roundcube Webmail/0.6-beta
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 24 Mar 2014 23:02:22 -0000

Karl,

I don't know why the PR system doesn't allow multiple attachments, so
that patches could be sent and easily downloaded.

I continue to be unsuccessful at getting patches out of email that don't
have line-wrap or tab issues. So, again, if you could send a link or post
the latest patch, I would be glad to let it churn.

If I need an education about the PR system, please someone help me.

That last patch you sent me privately applied like this against r263618M:

Hmm...  Looks like a new-style context diff to me...
The text leading up to this was:
--------------------------
|*** arc.c.original    Thu Mar 13 09:18:48 2014
|--- arc.c    Fri Mar 21 09:04:23 2014
--------------------------
Patching file arc.c using Plan A...
Hunk #1 succeeded at 18.
Hunk #2 succeeded at 210.
Hunk #3 succeeded at 281.
Hunk #4 succeeded at 2539.
Hunk #5 succeeded at 2582.
Hunk #6 succeeded at 2599.
done
[root@charon /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs]#

but failed to build.

I hope others chime in, but so far the updates from these patches have
been positive for me, understanding though that everyone's workloads vary.
Unless I knowingly drive the system into resource starvation, everything
is much better in not holding wired and not driving swap.  My main
background resource hogs are poudriere and random load.

Regards,

--mikej

On 2014-03-24 11:21, Karl Denninger wrote:
> Mike;
>
> Did the patch apply cleanly?
>
> That declaration is in <sys/vmmeter.h>, which should be included up
> near the top of the file if NEWRECLAIM is defined and the patch
> applied ok.
>
> See here, last entry (that's the most-recent rev), revert to the
> stock arc.c (or put arc.c.orig back, which should be the original
> file) and re-apply it.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=187594
>
> Lastly, what OS rev are you running?  The patch is against 10-STABLE;
> it is ok against both the current checked-in rev of arc.c and the
> previous (prior to the new feature flags being added a week or so ago)
> rev back.
>
> It sounds like the include didn't get applied.
>
> On 3/24/2014 10:00 AM, mikej wrote:
>> Karl,
>>
>> Not being a C coder, it appears a declaration is missing.
>>
>> --- arc.o ---
>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2614:15:
>> error: use of undeclared identifier 'cnt'
>>             freepages = cnt.v_free_target - (cnt.v_free_target / 33);
>>
>> Thanks again,
>> Michael Jung
>>
>>
>> On 2014-03-21 20:05, Karl Denninger wrote:
>>> Here 'ya go...
>>>
>>> Please keep me posted (the list is best, as the more public commentary
>>> the better, and if this needs more tuning that's the way to find out!)
>>> on how it works for you.
>>>
>>> I have it in production at this point and am happy with it -- the
>>> current default is at the pager "wakeup" level less 3%, but it of
>>> course can be tuned manually.
>>>
>>> On 3/21/2014 3:59 PM, mikej wrote:
>>>
>>>> Karl,
>>>>
>>>> I've looked at my raw mailbox and something is trashing tabs and line
>>>> length for your more recent patches in email.
>>>>
>>>> I did not see any attachments, nor updates to the PR for download -
>>>> would you mind sending me the latest patch as an attachment?
>>>>
>>>> Thanks for your work, I believe this is going to add real stability
>>>> without having to set vfs.zfs.arc_max and other tunables.
>>>>
>>>> Kind regards,
>>>>
>>>> Michael Jung
>>>>
>>>> On 2014-03-20 13:10, Karl Denninger wrote:
>>>>
>>>>> The following reply was made to PR kern/187594; it has been noted
>>>>> by GNATS.
>>>>>
>>>>> From: Karl Denninger [1]
>>>>> To: bug-followup@FreeBSD.org [2], karl@fs.denninger.net [3]
>>>>> Cc:
>>>>> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
>>>>> Date: Thu, 20 Mar 2014 12:00:54 -0500
>>>>>
>>>>> This is a cryptographically signed message in MIME format.
>>>>>
>>>>> --------------ms010508000607000909070805
>>>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>>> Content-Transfer-Encoding: quoted-printable
>>>>>
>>>>> Responsive to avg's comment, and with another overnight and daytime load
>>>>> of testing on multiple machines with varying memory configs from 4-24GB
>>>>> of RAM, here is another version of the patch.
>>>>>
>>>>> The differences are:
>>>>>
>>>>> 1. No longer use kernel_sysctlbyname; include the VM header file and get
>>>>> the values directly (less overhead.)  Remove the variables no longer needed.
>>>>>
>>>>> 2. Set the default free RAM level for ARC shrinkage to v_free_target
>>>>> less 3%, as I was able to provoke a stall once with it set to a 5%
>>>>> reservation, was able to provoke it with the parameter set to 10% with a
>>>>> lot of work, and was able to do so "on demand" with it set to 20%.  With
>>>>> a 5% invasion, initiating a scrub with very heavy I/O and image load
>>>>> (hundreds of web and database processes) provoked a ~10 second system
>>>>> stall.  With it set to 3% I have not been able to reproduce the stall,
>>>>> yet the inactive page count remains stable even under extremely heavy
>>>>> load, indicating that page-stealing remains effective when required.
>>>>> Note that for my workload, even with this level set above v_free_target,
>>>>> which would imply no page stealing by the VM system before ARC expansion
>>>>> is halted, I do not get unbridled inactive page growth.
>>>>>
>>>>> As before, vfs.zfs.arc_freepages and vfs.zfs.arc_freepage_percent remain
>>>>> as accessible knobs if you wish to twist them for some reason to
>>>>> compensate for an unusual load profile or machine configuration.
>>>>>
>>>>> *** arc.c.original    Thu Mar 13 09:18:48 2014
>>>>> --- arc.c    Thu Mar 20 11:51:48 2014
>>>>> ***************
>>>>> *** 18,23 ****
>>>>> --- 18,94 ----
>>>>>      *
>>>>>      * CDDL HEADER END
>>>>>      */
>>>>>   +
>>>>>   + /* Karl Denninger (karl@denninger.net [4]), 3/20/2014, FreeBSD-specific
>>>>>   +  *
>>>>>   +  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
>>>>>   +  * the ARC cache to be pared down.  The reason for the change is that the
>>>>>   +  * apparent attempted algorithm is to start evicting ARC cache when free
>>>>>   +  * pages fall below 25% of installed RAM.  This maps reasonably well to how
>>>>>   +  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
>>>>>   +  * to pare down.
>>>>>   +  *
>>>>>   +  * The problem is that on FreeBSD machines the system doesn't appear to be
>>>>>   +  * getting what the authors of the original code thought they were looking at
>>>>>   +  * with its test -- or at least not what Solaris did -- and as a result that
>>>>>   +  * test never triggers.  That leaves the only reclaim trigger as the "paging
>>>>>   +  * needed" status flag, and by the time that trips the system is already
>>>>>   +  * in low-memory trouble.  This can lead to severe pathological behavior
>>>>>   +  * under the following scenario:
>>>>>   +  * - The system starts to page and ARC is evicted.
>>>>>   +  * - The system stops paging as ARC's eviction drops wired RAM a bit.
>>>>>   +  * - ARC starts increasing its allocation again, and wired memory grows.
>>>>>   +  * - A new image is activated, and the system once again attempts to page.
>>>>>   +  * - ARC starts to be evicted again.
>>>>>   +  * - Back to #2
>>>>>   +  *
>>>>>   +  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
>>>>>   +  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
>>>>>   +  * else needs it.  That would be ok if we evicted cache when required.
>>>>>   +  *
>>>>>   +  * Unfortunately the system can get into a state where it never
>>>>>   +  * manages to page anything of materiality back in, as if there is active
>>>>>   +  * I/O the ARC will start grabbing space once again as soon as the memory
>>>>>   +  * contention state drops.  For this reason the "paging is occurring" flag
>>>>>   +  * should be the **last resort** condition for ARC eviction; you want to
>>>>>   +  * (as Solaris does) start when there is material free RAM left BUT the
>>>>>   +  * vm system thinks it needs to be active to steal pages back in the attempt
>>>>>   +  * to never get into the condition where you're potentially paging off
>>>>>   +  * executables in favor of leaving disk cache allocated.
>>>>>   +  *
>>>>>   +  * To fix this we change how we look at low memory, declaring two new
>>>>>   +  * runtime tunables.
>>>>>   +  *
>>>>>   +  * The new sysctls are:
>>>>>   +  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
>>>>>   +  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
>>>>>   +  *
>>>>>   +  * vfs.zfs.arc_freepages is initialized from vm.v_free_target, less 3%.
>>>>>   +  * This should insure that we allow the VM system to steal pages first,
>>>>>   +  * but pare the cache before we suspend processes attempting to get more
>>>>>   +  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
>>>>>   +  * or force a specific percentage reservation as well, but doing so may
>>>>>   +  * cause the cache to pare back while the VM system remains willing to
>>>>>   +  * allow "inactive" pages to accumulate.  The challenge is that image
>>>>>   +  * activation can force things into the page space on a repeated basis
>>>>>   +  * if you allow this level to be too small (the above pathological
>>>>>   +  * behavior); the defaults should avoid that behavior but the sysctls
>>>>>   +  * are exposed should your workload require adjustment.
>>>>>   +  *
>>>>>   +  * If we're using this check for low memory we are replacing the previous
>>>>>   +  * ones, including the oddball "random" reclaim that appears to fire far
>>>>>   +  * more often than it should.  We still trigger if the system pages.
>>>>>   +  *
>>>>>   +  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
>>>>>   +  * status messages when the reclaim status trips on and off, along with the
>>>>>   +  * page count aggregate that triggered it (and the free space) for each
>>>>>   +  * event.
>>>>>   +  */
>>>>>   +
>>>>>   + #define    NEWRECLAIM
>>>>>   + #undef     NEWRECLAIM_DEBUG
>>>>>   +
>>>>>   +
>>>>>     /*
>>>>>      * Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.
>>>>>      * Copyright (c) 2013 by Delphix.  All rights reserved.
>>>>> ***************
>>>>> *** 139,144 ****
>>>>> --- 210,222 ----
>>>>>
>>>>>     #include
>>>>>
>>>>>   + #ifdef    NEWRECLAIM
>>>>>   + #ifdef    __FreeBSD__
>>>>>   + #include
>>>>>   + #include
>>>>>   + #endif
>>>>>   + #endif    /* NEWRECLAIM */
>>>>>   +
>>>>>     #ifdef illumos
>>>>>     #ifndef _KERNEL
>>>>>     /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
>>>>> ***************
>>>>> *** 203,218 ****
>>>>> --- 281,316 ----
>>>>>     int zfs_arc_shrink_shift = 0;
>>>>>     int zfs_arc_p_min_shift = 0;
>>>>>     int zfs_disable_dup_eviction = 0;
>>>>>   + #ifdef NEWRECLAIM
>>>>>   + #ifdef __FreeBSD__
>>>>>   + static int freepages = 0;    /* This much memory is considered critical */
>>>>>   + static int percent_target = 0;    /* Additionally reserve "X" percent free RAM */
>>>>>   + #endif    /* __FreeBSD__ */
>>>>>   + #endif    /* NEWRECLAIM */
>>>>>
>>>>>     TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
>>>>>     TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
>>>>>     TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
>>>>>   + #ifdef NEWRECLAIM
>>>>>   + #ifdef __FreeBSD__
>>>>>   + TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
>>>>>   + TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
>>>>>   + #endif    /* __FreeBSD__ */
>>>>>   + #endif    /* NEWRECLAIM */
>>>>>   +
>>>>>     SYSCTL_DECL(_vfs_zfs);
>>>>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0,
>>>>>         "Maximum ARC size");
>>>>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0,
>>>>>         "Minimum ARC size");
>>>>>
>>>>>   + #ifdef NEWRECLAIM
>>>>>   + #ifdef __FreeBSD__
>>>>>   + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
>>>>>   + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
>>>>>   + #endif    /* __FreeBSD__ */
>>>>>   + #endif    /* NEWRECLAIM */
>>>>>   +
>>>>>     /*
>>>>>      * Note that buffers can be in one of 6 states:
>>>>>      * ARC_anon - anonymous (discussed below)
>>>>> ***************
>>>>> *** 2438,2443 ****
>>>>> --- 2536,2546 ----
>>>>>     {
>>>>>
>>>>>     #ifdef _KERNEL
>>>>>   + #ifdef NEWRECLAIM_DEBUG
>>>>>   + static int xval = -1;
>>>>>   + static int oldpercent = 0;
>>>>>   + static int oldfreepages = 0;
>>>>>   + #endif    /* NEWRECLAIM_DEBUG */
>>>>>
>>>>>         if (needfree)
>>>>>             return (1);
>>>>> ***************
>>>>> *** 2476,2481 ****
>>>>> --- 2579,2585 ----
>>>>>             return (1);
>>>>>
>>>>>     #if defined(__i386)
>>>>>   +
>>>>>         /*
>>>>>          * If we're on an i386 platform, it's possible that we'll exhaust the
>>>>>          * kernel heap space before we ever run out of available physical
>>>>> ***************
>>>>> *** 2492,2502 ****
>>>>>             return (1);
>>>>>     #endif
>>>>>     #else    /* !sun */
>>>>>         if (kmem_used() > (kmem_size() * 3) / 4)
>>>>>             return (1);
>>>>>     #endif    /* sun */
>>>>>
>>>>>   - #else
>>>>>         if (spa_get_random(100) == 0)
>>>>>             return (1);
>>>>>     #endif
>>>>> --- 2596,2658 ----
>>>>>             return (1);
>>>>>     #endif
>>>>>     #else    /* !sun */
>>>>>   +
>>>>>   + #ifdef    NEWRECLAIM
>>>>>   + #ifdef    __FreeBSD__
>>>>>   + /*
>>>>>   +  * Implement the new tunable free RAM algorithm.  We check the free pages
>>>>>   +  * against the minimum specified target and the percentage that should be
>>>>>   +  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
>>>>>   +  * on a FreeBSD system the older checks are not performed.
>>>>> + * >>>>> + * Check first to see if we need to init freepages, then >>>>> test. >>>>> + */ >>>>> + if (!freepages) { /* If zero then >>>>> (re)init */ >>>>> + freepages =3D cnt.v_free_target - >>>>> (cnt.v_free_target / 33); >>>>> + #ifdef NEWRECLAIM_DEBUG >>>>> + printf("ZFS ARC: Default >>>>> vfs.zfs.arc_freepages to [%u] [%u less 3%%]= >>>>> n", freepages, cnt.v_free_target); >>>>> + #endif /* NEWRECLAIM_DEBUG */ >>>>> + } >>>>> + #ifdef NEWRECLAIM_DEBUG >>>>> + if (percent_target !=3D oldpercent) { >>>>> + printf("ZFS ARC: Reservation percent change >>>>> to [%d], [%d] pages, [%d]= >>>>> freen", percent_target, cnt.v_page_count, cnt.v_free_count); >>>>> + oldpercent =3D percent_target; >>>>> + } >>>>> + if (freepages !=3D oldfreepages) { >>>>> + printf("ZFS ARC: Low RAM page change to [%d], >>>>> [%d] pages, [%d] freen= >>>>> ", freepages, cnt.v_page_count, cnt.v_free_count); >>>>> + oldfreepages =3D freepages; >>>>> + } >>>>> + #endif /* NEWRECLAIM_DEBUG */ >>>>> + /* >>>>> + * Now figure out how much free RAM we require to call the >>>>> ARC cache st= >>>>> atus >>>>> + * "ok". Add the percentage specified of the total to the >>>>> base require= >>>>> ment. >>>>> + */ >>>>> + >>>>> + if (cnt.v_free_count < freepages + ((cnt.v_page_count >>>>> / 100) * percent= >>>>> _target)) { >>>>> + #ifdef NEWRECLAIM_DEBUG >>>>> + if (xval !=3D 1) { >>>>> + printf("ZFS ARC: RECLAIM total %u, >>>>> free %u, free pct (%u), reserved = >>>>> (%u), target pct (%u)n", cnt.v_page_count, cnt.v_free_count, >>>>> ((cnt.v_fre= >>>>> e_count * 100) / cnt.v_page_count), freepages, percent_target); >>>>> >>>>> + xval =3D 1; >>>>> + } >>>>> + #endif /* NEWRECLAIM_DEBUG */ >>>>> + return(1); >>>>> + } else { >>>>> + #ifdef NEWRECLAIM_DEBUG >>>>> + if (xval !=3D 0) { >>>>> + printf("ZFS ARC: NORMAL total %u, >>>>> free %u, free pct (%u), reserved (= >>>>> %u), target pct (%u)n", cnt.v_page_count, cnt.v_free_count, >>>>> ((cnt.v_free= >>>>> _count * 100) / cnt.v_page_count), freepages, percent_target); >>>>> + xval =3D 0; >>>>> + } >>>>> + #endif /* NEWRECLAIM_DEBUG */ >>>>> + return(0); >>>>> + } >>>>> + >>>>> + #endif /* __FreeBSD__ */ >>>>> + #endif /* NEWRECLAIM */ >>>>> + >>>>> if (kmem_used() > (kmem_size() * 3) / 4) >>>>> return (1); >>>>> #endif /* sun */ >>>>> =20 >>>>> if (spa_get_random(100) =3D=3D 0) >>>>> return (1); >>>>> #endif >>>>> >>>>> --=20 >>>>> -- Karl >>>>> karl@denninger.net [5] >>>>> >>>>> --------------ms010508000607000909070805 >>>>> Content-Type: application/pkcs7-signature; name="smime.p7s" >>>>> Content-Transfer-Encoding: base64 >>>>> Content-Disposition: attachment; filename="smime.p7s" >>>>> Content-Description: S/MIME Cryptographic Signature >>>>> >>>>> >>>>> >>>> >>> >>> >>> MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIFTzCC >>>>> >>>>> >>>>> >>>> >>> >>> >>> BUswggQzoAMCAQICAQgwDQYJKoZIhvcNAQEFBQAwgZ0xCzAJBgNVBAYTAlVTMRAwDgYDVQQI >>>>> >>>>> >>>>> >>>> >>> >>> >>> EwdGbG9yaWRhMRIwEAYDVQQHEwlOaWNldmlsbGUxGTAXBgNVBAoTEEN1ZGEgU3lzdGVtcyBM >>>>> >>>>> >>>>> >>>> >>> >>> >>> TEMxHDAaBgNVBAMTE0N1ZGEgU3lzdGVtcyBMTEMgQ0ExLzAtBgkqhkiG9w0BCQEWIGN1c3Rv >>>>> >>>>> >>>>> >>>> >>> >>> >>> bWVyLXNlcnZpY2VAY3VkYXN5c3RlbXMubmV0MB4XDTEzMDgyNDE5MDM0NFoXDTE4MDgyMzE5 >>>>> >>>>> >>>>> >>>> >>> >>> >>> MDM0NFowWzELMAkGA1UEBhMCVVMxEDAOBgNVBAgTB0Zsb3JpZGExFzAVBgNVBAMTDkthcmwg >>>>> >>>>> >>>>> >>>> >>> >>> >>> RGVubmluZ2VyMSEwHwYJKoZIhvcNAQkBFhJrYXJsQGRlbm5pbmdlci5uZXQwggIiMA0GCSqG >>>>> >>>>> >>>>> >>>> >>> >>> >>> 
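Boiled down, the test the patch above adds for FreeBSD is a single
page-count comparison.  A minimal restatement in C (my paraphrase of
the diff, not the literal code; `cnt' is the global struct vmmeter from
<sys/vmmeter.h>, the declaration whose missing include produced the
"undeclared identifier 'cnt'" build error reported earlier in this thread):

	/* Paraphrase of the patch's low-memory check. */
	if (freepages == 0)	/* first call: default to v_free_target less ~3% */
		freepages = cnt.v_free_target - (cnt.v_free_target / 33);

	if (cnt.v_free_count < freepages +
	    ((cnt.v_page_count / 100) * percent_target))
		return (1);	/* RAM is short; ask for ARC shrinkage */
	return (0);		/* RAM is "sufficient"; let the ARC grow */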
>>>>> _______________________________________________
>>>>> freebsd-fs@freebsd.org [6] mailing list
>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs [7]
>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" [8]
>>>
>>> -- Karl Denninger
>>> karl@denninger.net [9]
>>> _The Market Ticker_
>>>
>>> Links:
>>> ------
>>> [1] mailto:karl@denninger.net
>>> [2] mailto:bug-followup@FreeBSD.org
>>> [3] mailto:karl@fs.denninger.net
>>> [4] mailto:karl@denninger.net
>>> [5] mailto:karl@denninger.net
>>> [6] mailto:freebsd-fs@freebsd.org
>>> [7] http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> [8] mailto:freebsd-fs-unsubscribe@freebsd.org
>>> [9] mailto:karl@denninger.net

>> %SPAMBLOCK-SYS: Matched [+mikej ], message ok

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 25 08:10:32 2014
From: Matthias Gamsjager
Date: Tue, 25 Mar 2014 09:10:00 +0100
To: mikej
Cc: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix [SB QUAR: Fri Mar 21 16:01:12 2014]

What commands did you run for the build?

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 25 08:18:25 2014
From: John
To: mikej
Cc: freebsd-fs@freebsd.org
Date: Tue, 25 Mar 2014 01:10:23 -0700
Message-Id: <20140325081023.99DA9131@server.theusgroup.com>
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix [SB QUAR: Fri Mar 21 16:01:12 2014]

>Karl,
>
>I don't know why the PR system doesn't allow multiple attachments for patches
>to be sent so they can easily be downloaded.  I continue to be unsuccessful
>at getting patches out of email that don't have line wrap or tab issues.  So,
>again, if you could send a link or post the latest patch I would be glad to
>let it churn.
>
>If I need an education about the PR system, please someone help me.

I've had to hand edit every patch due to the extra stuff added in the email
from mime encoding and the PR system stripping all leading white space from
the patches that show up there.

That said, I've been applying the patches to 9.2.  The previous patch was a
big improvement under my workload.  It's too early to tell about the latest
patch, but it's applied to 9.2 after I upgraded to the new zfs features,
r263423.

The machine has 24 GB of ram, and prior to the patches it had vfs.zfs.arc_max
tuned to 14 GB to keep from swapping.  With the patch I've raised arc_max to
16 GB (I'll raise it higher after more testing), and arc management seems to
be working well.
John Theus
TheusGroup.com

From owner-freebsd-fs@FreeBSD.ORG Tue Mar 25 12:09:25 2014
Message-ID: <533171E3.7060000@denninger.net>
Date: Tue, 25 Mar 2014 07:09:07 -0500
From: Karl Denninger
To: John, mikej
Cc: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix [SB QUAR: Fri Mar 21 16:01:12 2014]

On 3/25/2014 3:10 AM, John wrote:
>> Karl,
>>
>> I don't know why the PR system doesn't allow multiple attachments for
>> patches to be sent so they can easily be downloaded.  I continue to be
>> unsuccessful at getting patches out of email that don't have line wrap
>> or tab issues.  So, again, if you could send a link or post the latest
>> patch I would be glad to let it churn.
>>
>> If I need an education about the PR system, please someone help me.
> I've had to hand edit every patch due to the extra stuff added in the email
> from mime encoding and the PR system stripping all leading white space from
> the patches that show up there.
>
> That said, I've been applying the patches to 9.2.  The previous patch was a
> big improvement under my workload.  It's too early to tell about the latest
> patch, but it's applied to 9.2 after I upgraded to the new zfs features,
> r263423.
>
> The machine has 24 GB of ram, and prior to the patches it had vfs.zfs.arc_max
> tuned to 14 GB to keep from swapping.  With the patch I've raised arc_max to
> 16 GB (I'll raise it higher after more testing), and arc management seems to
> be working well.
>
> John Theus
> TheusGroup.com
>
You can remove the arc_max tuning entirely, since the latest patch
allows you to set the low-RAM warning level either in pages, percentage
of RAM, or both at runtime (making pre-setting the arc_max level
unnecessary.)

--
-- Karl
karl@denninger.net
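(For reference: with the patch applied these are ordinary sysctl knobs, so
they can be changed on a live system, e.g. "sysctl vfs.zfs.arc_freepages=65536"
or "sysctl vfs.zfs.arc_freepage_percent=5".  The numbers here are purely
illustrative, not recommendations from the thread.)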
From owner-freebsd-fs@FreeBSD.ORG Tue Mar 25 23:10:36 2014
Date: Tue, 25 Mar 2014 19:10:35 -0400 (EDT)
From: Rick Macklem
To: FreeBSD Filesystems, FreeBSD Net
Cc: Alexander Motin
Message-ID: <1609686124.539328.1395789035334.JavaMail.root@uoguelph.ca>
Subject: RFC: How to fix the NFS/iSCSI vs TSO problem

Hi,

First off, I hope you don't mind that I cross-posted this, but I wanted
to make sure both the NFS/iSCSI and networking types see it.

If you look in this mailing list thread:
http://docs.FreeBSD.org/cgi/mid.cgi?1850411724.1687820.1395621539316.JavaMail.root
you'll see that several people have been working hard at testing and,
thanks to them, I think I now know what is going on.

(This applies to network drivers that support TSO and are limited to 32
transmit segments->32 mbufs in chain.)  Doing a quick search I found the
following drivers that appear to be affected (I may have missed some):
jme, fxp, age, sge, msk, alc, ale, ixgbe/ix, nfe, e1000/em, re

Further, of these drivers, the following use m_collapse() and not
m_defrag() to try and reduce the # of mbufs in the chain.
m_collapse() is not going to get the 35 mbufs down to 32 mbufs, as far
as I can see, so these ones are more badly broken:
jme, fxp, age, sge, alc, ale, nfe, re

The long description is in the above thread, but the short version is:
- NFS generates a chain with 35 mbufs in it for (read/readdir replies and
  write requests) made up of (tcpip header, RPC header, NFS args, 32
  clusters of file data)
- tcp_output() usually trims the data size down to tp->t_tsomax (65535)
  and then some more to make it an exact multiple of TCP transmit data size.
- the net driver prepends an ethernet header, growing the length by 14 (or
  sometimes 18 for vlans), but in the first mbuf and not adding one to the chain.
- m_defrag() copies this to a chain of 32 mbuf clusters (because the total
  data length is <= 64K) and it gets sent

However, if the data length is a little less than 64K when passed to
tcp_output(), so that the length including headers is in the range
65519->65535...
- tcp_output() doesn't reduce its size.
- the net driver adds an ethernet header, making the total data length
  slightly greater than 64K
- m_defrag() copies it to a chain of 33 mbuf clusters, which fails with EFBIG
--> trainwrecks NFS performance, because the TSO segment is dropped instead
of sent.

A tester also stated that the problem could be reproduced using iSCSI.
Maybe Edward Napierala might know some details w.r.t. what kind of mbuf
chain iSCSI generates?

Also, one tester has reported that setting if_hw_tsomax in the driver
before the ether_ifattach() call didn't make the value of tp->t_tsomax
smaller.  However, reducing IP_MAXPACKET (which is what it is set to by
default) did reduce it.  I have no idea why this happens or how to fix it,
but it implies that setting if_hw_tsomax in the driver isn't a solution
until this is resolved.

So, what to do about this?
First, I'd like a simple fix/workaround that can go into 9.3 (which is
code freeze in May).  The best thing I can think of is setting
if_hw_tsomax to a smaller default value.  (Line# 658 of sys/net/if.c in
head.)

Version A:
replace
    ifp->if_hw_tsomax = IP_MAXPACKET;
with
    ifp->if_hw_tsomax = min(32 * MCLBYTES - (ETHER_HDR_LEN +
        ETHER_VLAN_ENCAP_LEN), IP_MAXPACKET);
plus
replace m_collapse() with m_defrag() in the drivers listed above.

This would only reduce the default from 65535->65518, so it only impacts
the uncommon case where the output size (with tcpip header) is within
this range.  (As such, I don't think it would have a negative impact for
drivers that handle more than 32 transmit segments.)
From the testers, it seems that this is sufficient to get rid of the EFBIG
errors.  (The total data length including ethernet header doesn't exceed
64K, so m_defrag() fits it into 32 mbuf clusters.)

The main downside of this is that there will be a lot of m_defrag() calls
being done and they do quite a bit of bcopy()'ng.

Version B:
replace
    ifp->if_hw_tsomax = IP_MAXPACKET;
with
    ifp->if_hw_tsomax = min(29 * MCLBYTES, IP_MAXPACKET);

This one would avoid the m_defrag() calls, but might have a negative
impact on TSO performance for drivers that can handle 35 transmit
segments, since the maximum TSO segment size is reduced by about 6K.
(Because of the second size reduction to an exact multiple of TCP
transmit data size, the exact amount varies.)
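To put concrete numbers on the two options (my arithmetic, assuming the
stock 2048-byte MCLBYTES and the usual 14-byte ethernet plus 4-byte VLAN
encapsulation headers):

	32 * 2048 - (14 + 4) = 65518	/* Version A, vs. IP_MAXPACKET = 65535 */
	29 * 2048            = 59392	/* Version B: about 6K below 65535     */

which matches the 65535->65518 and "about 6K" figures above.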
Possible longer term fixes:
One longer term fix might be to add something like if_hw_tsomaxseg so that
a driver can set a limit on the number of transmit segments (mbufs in
chain) and tcp_output() could use that to limit the size of the TSO
segment, as required.  (I have a first stab at such a patch, but no way to
test it, so I can't see that being done by May.  Also, it would require
changes to a lot of drivers to make it work.  I've attached this patch, in
case anyone wants to work on it?)

Another might be to increase the size of MCLBYTES (I don't see this as
practical for 9.3, although the actual change is simple.).  I do think
that increasing MCLBYTES might be something to consider doing in the
future, for reasons beyond fixing this.

So, what do others think should be done?

rick

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 26 02:33:46 2014
From: Yonghyeon PYUN
Date: Wed, 26 Mar 2014 11:33:34 +0900
To: Rick Macklem
Cc: FreeBSD Filesystems, FreeBSD Net, Alexander Motin
Reply-To: pyunyh@gmail.com
Message-ID: <20140326023334.GB2973@michelle.cdnetworks.com>
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem

On Tue, Mar 25, 2014 at 07:10:35PM -0400, Rick Macklem wrote:
> Hi,
>
> First off, I hope you don't mind that I cross-posted this,
> but I wanted to make sure both the NFS/iSCSI and networking
> types see it.
> If you look in this mailing list thread:
> http://docs.FreeBSD.org/cgi/mid.cgi?1850411724.1687820.1395621539316.JavaMail.root
> you'll see that several people have been working hard at testing and
> thanks to them, I think I now know what is going on.

Thanks for your hard work on narrowing down that issue.  I'm too busy
with $work these days so I couldn't find time to investigate the issue.

> (This applies to network drivers that support TSO and are limited to 32 transmit
> segments->32 mbufs in chain.)  Doing a quick search I found the following
> drivers that appear to be affected (I may have missed some):
> jme, fxp, age, sge, msk, alc, ale, ixgbe/ix, nfe, e1000/em, re

The magic number 32 was chosen a long time ago when I implemented TSO
in non-Intel drivers.  I tried to find an optimal number to reduce the
size of kernel stack usage at that time.  bus_dma(9) will coalesce with
the previous segment if possible, so I thought the number 32 was not an
issue.  Not sure whether current bus_dma(9) still has the same code,
though.  The number 32 is an arbitrary one, so you can increase it if
you want.

> Further, of these drivers, the following use m_collapse() and not m_defrag()
> to try and reduce the # of mbufs in the chain.  m_collapse() is not going to
> get the 35 mbufs down to 32 mbufs, as far as I can see, so these ones are
> more badly broken:
> jme, fxp, age, sge, alc, ale, nfe, re

I guess m_defrag(9) is more optimized for non-TSO packets.  You don't
want to waste CPU cycles copying the full frame just to reduce the
number of mbufs in the chain.  For TSO packets m_defrag(9) looks
better, but if we always have to copy a full TSO packet to make TSO
work, driver writers will have to invent a better scheme rather than
blindly relying on m_defrag(9), I guess.
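For readers following along, the encap-path pattern being discussed looks
roughly like this (a simplified sketch, not copied from any one driver;
the tag, map, and field names are made up for illustration):

	/*
	 * Load a TSO mbuf chain for transmit; the DMA tag was created
	 * with a 32-segment limit, so a 33+ cluster chain fails with EFBIG.
	 */
	error = bus_dmamap_load_mbuf_sg(sc->sc_tx_tag, txd->tx_dmamap, m,
	    txsegs, &nsegs, BUS_DMA_NOWAIT);
	if (error == EFBIG) {
		/*
		 * m_defrag() copies the whole chain into fresh clusters,
		 * so it can reach the minimum cluster count; m_collapse()
		 * only merges neighboring mbufs where the data already
		 * fits, which is why it cannot turn 35 mbufs into 32 here.
		 */
		struct mbuf *n = m_defrag(m, M_NOWAIT);
		if (n == NULL) {
			m_freem(m);
			return (ENOBUFS);
		}
		m = n;
		error = bus_dmamap_load_mbuf_sg(sc->sc_tx_tag,
		    txd->tx_dmamap, m, txsegs, &nsegs, BUS_DMA_NOWAIT);
	}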
> The long description is in the above thread, but the short version is:
> - NFS generates a chain with 35 mbufs in it for (read/readdir replies and
>   write requests) made up of (tcpip header, RPC header, NFS args, 32
>   clusters of file data)
> - tcp_output() usually trims the data size down to tp->t_tsomax (65535) and
>   then some more to make it an exact multiple of TCP transmit data size.
> - the net driver prepends an ethernet header, growing the length by 14 (or
>   sometimes 18 for vlans), but in the first mbuf and not adding one to the chain.
> - m_defrag() copies this to a chain of 32 mbuf clusters (because the total
>   data length is <= 64K) and it gets sent
>
> However, if the data length is a little less than 64K when passed to
> tcp_output(), so that the length including headers is in the range
> 65519->65535...
> - tcp_output() doesn't reduce its size.
> - the net driver adds an ethernet header, making the total data length
>   slightly greater than 64K
> - m_defrag() copies it to a chain of 33 mbuf clusters, which fails with EFBIG
> --> trainwrecks NFS performance, because the TSO segment is dropped instead
> of sent.
>
> A tester also stated that the problem could be reproduced using iSCSI.
> Maybe Edward Napierala might know some details w.r.t. what kind of mbuf
> chain iSCSI generates?
>
> Also, one tester has reported that setting if_hw_tsomax in the driver
> before the ether_ifattach() call didn't make the value of tp->t_tsomax
> smaller.  However, reducing IP_MAXPACKET (which is what it is set to by
> default) did reduce it.  I have no idea why this happens or how to fix it,
> but it implies that setting if_hw_tsomax in the driver isn't a solution
> until this is resolved.
>
> So, what to do about this?
> First, I'd like a simple fix/workaround that can go into 9.3 (which is
> code freeze in May).  The best thing I can think of is setting
> if_hw_tsomax to a smaller default value.  (Line# 658 of sys/net/if.c in
> head.)
>
> Version A:
> replace
>     ifp->if_hw_tsomax = IP_MAXPACKET;
> with
>     ifp->if_hw_tsomax = min(32 * MCLBYTES - (ETHER_HDR_LEN +
>         ETHER_VLAN_ENCAP_LEN), IP_MAXPACKET);
> plus
> replace m_collapse() with m_defrag() in the drivers listed above.
>
> This would only reduce the default from 65535->65518, so it only impacts
> the uncommon case where the output size (with tcpip header) is within
> this range.  (As such, I don't think it would have a negative impact for
> drivers that handle more than 32 transmit segments.)
> From the testers, it seems that this is sufficient to get rid of the EFBIG
> errors.  (The total data length including ethernet header doesn't exceed
> 64K, so m_defrag() fits it into 32 mbuf clusters.)
>
> The main downside of this is that there will be a lot of m_defrag() calls
> being done and they do quite a bit of bcopy()'ng.
>
> Version B:
> replace
>     ifp->if_hw_tsomax = IP_MAXPACKET;
> with
>     ifp->if_hw_tsomax = min(29 * MCLBYTES, IP_MAXPACKET);
>
> This one would avoid the m_defrag() calls, but might have a negative
> impact on TSO performance for drivers that can handle 35 transmit
> segments, since the maximum TSO segment size is reduced by about 6K.
> (Because of the second size reduction to an exact multiple of TCP
> transmit data size, the exact amount varies.)
>
> Possible longer term fixes:
> One longer term fix might be to add something like if_hw_tsomaxseg so that
> a driver can set a limit on the number of transmit segments (mbufs in
> chain) and tcp_output() could use that to limit the size of the TSO
> segment, as required.  (I have a first stab at such a patch, but no way to
> test it, so I can't see that being done by May.  Also, it would require
> changes to a lot of drivers to make it work.  I've attached this patch, in
> case anyone wants to work on it?)
>
> Another might be to increase the size of MCLBYTES (I don't see this as
> practical for 9.3, although the actual change is simple.).  I do think
> that increasing MCLBYTES might be something to consider doing in the
> future, for reasons beyond fixing this.
>
> So, what do others think should be done?
>
> rick

AFAIK all the TSO capable drivers you mentioned above have no limit on
the number of TX segments in the TSO path.  Not sure about Intel
controllers, though.  Increasing the number of segments will consume
lots of kernel stack in those drivers.  Given that ixgbe, which seems to
use 100, didn't show any kernel stack shortage, I think bumping the
number of segments would be a quick way to address the issue.

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 26 12:30:03 2014
Date: Wed, 26 Mar 2014 12:30:03 GMT
Message-Id: <201403261230.s2QCU3vI095105@freefall.freebsd.org>
To: freebsd-fs@FreeBSD.org
From: Karl Denninger
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix

The following reply was made to PR kern/187594; it has been noted by GNATS.

From: Karl Denninger
To: bug-followup@FreeBSD.org
Cc:
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Wed, 26 Mar 2014 07:20:25 -0500

Updated to handle the change in <sys/vmmeter.h> that was recently
committed to HEAD, and slightly tweak the default reservation to be
equal to the VM system's "wakeup" level.

This appears, after lots of use in multiple environments, to be the
ideal default setting.  The knobs remain if you wish to twist them, and
I have also exposed the return flag for shrinking being needed should
you want to monitor it for some reason.

This change to arc.c has made a tremendous (and positive) difference in
system behavior, and others that are running it have made similar comments.

For those having problems with the PR system mangling these patches you
can get the below patch via direct fetch at
http://www.denninger.net/FreeBSD-Patches/arc-patch

*** arc.c.original	Sun Mar 23 14:56:01 2014
--- arc.c	Tue Mar 25 09:24:14 2014
***************
*** 18,23 ****
--- 18,95 ----
   *
   * CDDL HEADER END
   */
+
+ /* Karl Denninger (karl@denninger.net), 3/25/2014, FreeBSD-specific
+  *
+  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
+  * the ARC cache to be pared down.  The reason for the change is that the
+  * apparent attempted algorithm is to start evicting ARC cache when free
+  * pages fall below 25% of installed RAM.  This maps reasonably well to how
+  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
+  * to pare down.
+  *
+  * The problem is that on FreeBSD machines the system doesn't appear to be
+  * getting what the authors of the original code thought they were looking at
+  * with its test -- or at least not what Solaris did -- and as a result that
+  * test never triggers.  That leaves the only reclaim trigger as the "paging
+  * needed" status flag, and by the time that trips the system is already
+  * in low-memory trouble.  This can lead to severe pathological behavior
+  * under the following scenario:
+  * - The system starts to page and ARC is evicted.
+  * - The system stops paging as ARC's eviction drops wired RAM a bit.
+  * - ARC starts increasing its allocation again, and wired memory grows.
+  * - A new image is activated, and the system once again attempts to page.
+  * - ARC starts to be evicted again.
+  * - Back to #2
+  *
+  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
+  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
+  * else needs it.  That would be ok if we evicted cache when required.
+  *
+  * Unfortunately the system can get into a state where it never
+  * manages to page anything of materiality back in, as if there is active
+  * I/O the ARC will start grabbing space once again as soon as the memory
+  * contention state drops.  For this reason the "paging is occurring" flag
+  * should be the **last resort** condition for ARC eviction; you want to
+  * (as Solaris does) start when there is material free RAM left BUT the
+  * vm system thinks it needs to be active to steal pages back in the attempt
+  * to never get into the condition where you're potentially paging off
+  * executables in favor of leaving disk cache allocated.
+  *
+  * To fix this we change how we look at low memory, declaring two new
+  * runtime tunables and one status.
+  *
+  * The new sysctls are:
+  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
+  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
+  * vfs.zfs.arc_shrink_needed (shows "1" if we're asking for shrinking the ARC)
+  *
+  * vfs.zfs.arc_freepages is initialized from vm.v_free_target.
+  * This should ensure that we allow the VM system to steal pages,
+  * but pare the cache before we suspend processes attempting to get more
+  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
+  * or force a specific percentage reservation as well, but doing so may
+  * cause the cache to pare back while the VM system remains willing to
+  * allow "inactive" pages to accumulate.  The challenge is that image
+  * activation can force things into the page space on a repeated basis
+  * if you allow this level to be too small (the above pathological
+  * behavior); the defaults should avoid that behavior but the sysctls
+  * are exposed should your workload require adjustment.
+  *
+  * If we're using this check for low memory we are replacing the previous
+  * ones, including the oddball "random" reclaim that appears to fire far
+  * more often than it should.  We still trigger if the system pages.
+  *
+  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
+  * status messages when the reclaim status trips on and off, along with the
+  * page count aggregate that triggered it (and the free space) for each
+  * event.
+  */
+
+ #define NEWRECLAIM
+ #undef NEWRECLAIM_DEBUG
+
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.
   * Copyright (c) 2013 by Delphix. All rights reserved.
***************
*** 139,144 ****
--- 211,230 ----

  #include

+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ #include
+ #include <sys/vmmeter.h>
+ /*
+  * Struct cnt was renamed in -head (11-current) at __FreeBSD_version 1100016; check for it
+  */
+ #if __FreeBSD_version < 1100016
+ #define vm_cnt cnt
+ #endif /* __FreeBSD_version */
+
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  #ifdef illumos
  #ifndef _KERNEL
  /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
***************
*** 203,218 ****
--- 289,327 ----
  int zfs_arc_shrink_shift = 0;
  int zfs_arc_p_min_shift = 0;
  int zfs_disable_dup_eviction = 0;
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ static int freepages = 0;	/* This much memory is considered critical */
+ static int percent_target = 0;	/* Additionally reserve "X" percent free RAM */
+ static int shrink_needed = 0;	/* Shrinkage of ARC cache needed? */
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */

  TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
  TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
  TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
+ TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
+ TUNABLE_INT("vfs.zfs.arc_shrink_needed", &shrink_needed);
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  SYSCTL_DECL(_vfs_zfs);
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0, "Maximum ARC size");
  SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0, "Minimum ARC size");

+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
+ SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_shrink_needed, CTLFLAG_RD, &shrink_needed, 0, "ARC Memory Constrained (0 = no, 1 = yes)");
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  /*
   * Note that buffers can be in one of 6 states:
   * ARC_anon - anonymous (discussed below)
***************
*** 2438,2443 ****
--- 2547,2557 ----
  {

  #ifdef _KERNEL
+ #ifdef NEWRECLAIM_DEBUG
+ static int xval = -1;
+ static int oldpercent = 0;
+ static int oldfreepages = 0;
+ #endif /* NEWRECLAIM_DEBUG */

  	if (needfree)
  		return (1);
***************
*** 2476,2481 ****
--- 2590,2596 ----
  		return (1);

  #if defined(__i386)
+
  	/*
  	 * If we're on an i386 platform, it's possible that we'll exhaust the
  	 * kernel heap space before we ever run out of available physical
***************
*** 2492,2502 ****
  		return (1);
  #endif
  #else	/* !sun */
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */

- #else
  	if (spa_get_random(100) == 0)
  		return (1);
  #endif
--- 2607,2671 ----
  		return (1);
  #endif
  #else	/* !sun */
+
+ #ifdef NEWRECLAIM
+ #ifdef __FreeBSD__
+ /*
+  * Implement the new tunable free RAM algorithm.  We check the free pages
+  * against the minimum specified target and the percentage that should be
+  * free.  If we're low we ask for ARC cache shrinkage.  If this is defined
+  * on a FreeBSD system the older checks are not performed.
+  *
+  * Check first to see if we need to init freepages, then test.
+  */
+ 	if (!freepages) {		/* If zero then (re)init */
+ 		freepages = vm_cnt.v_free_target;
+ #ifdef NEWRECLAIM_DEBUG
+ 		printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u]\n", freepages);
+ #endif /* NEWRECLAIM_DEBUG */
+ 	}
+ #ifdef NEWRECLAIM_DEBUG
+ 	if (percent_target != oldpercent) {
+ 		printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, vm_cnt.v_page_count, vm_cnt.v_free_count);
+ 		oldpercent = percent_target;
+ 	}
+ 	if (freepages != oldfreepages) {
+ 		printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, vm_cnt.v_page_count, vm_cnt.v_free_count);
+ 		oldfreepages = freepages;
+ 	}
+ #endif /* NEWRECLAIM_DEBUG */
+ /*
+  * Now figure out how much free RAM we require to call the ARC cache status
+  * "ok".  Add the percentage specified of the total to the base requirement.
+  */
+
+ 	if (vm_cnt.v_free_count < (freepages + ((vm_cnt.v_page_count / 100) * percent_target))) {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 1) {
+ 			printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vm_cnt.v_page_count, vm_cnt.v_free_count, ((vm_cnt.v_free_count * 100) / vm_cnt.v_page_count), freepages, percent_target);
+ 			xval = 1;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		shrink_needed = 1;
+ 		return(1);
+ 	} else {
+ #ifdef NEWRECLAIM_DEBUG
+ 		if (xval != 0) {
+ 			printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", vm_cnt.v_page_count, vm_cnt.v_free_count, ((vm_cnt.v_free_count * 100) / vm_cnt.v_page_count), freepages, percent_target);
+ 			xval = 0;
+ 		}
+ #endif /* NEWRECLAIM_DEBUG */
+ 		shrink_needed = 0;
+ 		return(0);
+ 	}
+
+ #endif /* __FreeBSD__ */
+ #endif /* NEWRECLAIM */
+
  	if (kmem_used() > (kmem_size() * 3) / 4)
  		return (1);
  #endif	/* sun */

  	if (spa_get_random(100) == 0)
  		return (1);
  #endif

--
-- Karl
karl@denninger.net
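Since the new vfs.zfs.arc_shrink_needed flag is read-only (CTLFLAG_RD) and
meant for monitoring, a tiny userland poller is enough to watch it.  A
sketch (my own illustration, not part of the patch) using the standard
sysctlbyname(3) interface:

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		int shrink;
		size_t len = sizeof(shrink);

		/* Read the status sysctl the patch exposes. */
		if (sysctlbyname("vfs.zfs.arc_shrink_needed", &shrink, &len,
		    NULL, 0) == -1) {
			perror("sysctlbyname");
			return (1);
		}
		printf("ARC memory constrained: %s\n", shrink ? "yes" : "no");
		return (0);
	}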
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 26 17:00:28 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E456363E for ; Wed, 26 Mar 2014 17:00:27 +0000 (UTC) Received: from smtp.01.com (smtp.01.com [199.36.142.181]) by mx1.freebsd.org (Postfix) with ESMTP id 9F615D66 for ; Wed, 26 Mar 2014 17:00:27 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp-out-2.01.com (Postfix) with ESMTP id 608394F4B38 for ; Wed, 26 Mar 2014 11:53:14 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-2.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-2.01.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JurK8ljMR2VO for ; Wed, 26 Mar 2014
11:53:14 -0500 (CDT) Received: from smtp.01.com (localhost [127.0.0.1]) by smtp-out-2.01.com (Postfix) with ESMTP id 3D9D34F4EC2 for ; Wed, 26 Mar 2014 11:53:14 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by smtp-out-2.01.com (Postfix) with ESMTP id 2721E4F4EB3 for ; Wed, 26 Mar 2014 11:53:14 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-2.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-2.01.com [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id YmqH8RjyoRKp for ; Wed, 26 Mar 2014 11:53:14 -0500 (CDT) Received: from newman.zxcvm.com (unknown [38.109.103.138]) by smtp-out-2.01.com (Postfix) with ESMTPSA id AC18B4F4B38 for ; Wed, 26 Mar 2014 11:53:13 -0500 (CDT) From: Jason Breitman Message-Id: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Differences in reporting by du df and usedbydataset Date: Wed, 26 Mar 2014 12:53:11 -0400 References: To: fs@freebsd.org In-Reply-To: X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 26 Mar 2014 17:00:28 -0000

The different disk usage measurements are frequently discussed and most of the time snapshots are the source of confusion.
I use refquota to avoid this confusion for user based file systems, but cannot explain the below reports and hope you can help.

Why is there an ~18 GB difference between du and df / usedbydataset?
I included additional information so that you can see that used = usedbysnapshots + usedbydataset and that there are no reservations.

# du -sh /tank/users/auser
5.1G    /tank/users/auser

# df -h /tank/users/auser
Filesystem         Size   Used   Avail  Capacity  Mounted on
tank/users/auser   35G    23G    11G    66%       /tank/users/auser

# zfs get usedbydataset tank/users/auser
NAME              PROPERTY       VALUE  SOURCE
tank/users/auser  usedbydataset  23.2G  -

# zfs get used,usedbysnapshots,usedbydataset tank/users/auser
NAME              PROPERTY         VALUE  SOURCE
tank/users/auser  used             63.9G  -
tank/users/auser  usedbysnapshots  40.7G  -
tank/users/auser  usedbydataset    23.2G  -

# zfs get refreservation,usedbyrefreservation tank/users/auser
NAME              PROPERTY              VALUE  SOURCE
tank/users/auser  refreservation        none   default
tank/users/auser  usedbyrefreservation  0

OS: FreeBSD 9.1

# zpool upgrade -v
This system is currently running ZFS pool version 28.
The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements

For more information on a particular version, including supported releases, see the ZFS Administration Guide.

# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and filesystem user identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases, see the ZFS Administration Guide.

Jason Breitman
jbreitman@zxcvm.com

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 00:27:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 75F6AF67; Thu, 27 Mar 2014 00:27:56 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id DF39D162; Thu, 27 Mar 2014 00:27:55 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqwEAK9vM1ODaFve/2dsb2JhbABZg0FXgwq4PYYdTVGBMHSCJQEBAQMBAQEBIAQnIAsFFhgCAg0ZAiMGAQkmDgcEARoCBIdEAwkIDa5cmwUNh0gXgSmIHIMQgUQBBgEBGzQHgm+BSQSUXweBEGqDIIs2hUqDSiExewEIFyI X-IronPort-AV: E=Sophos;i="4.97,738,1389762000"; d="scan'208";a="109331825" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 26 Mar 2014 20:27:48 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 949A2B3EEF; Wed, 26 Mar 2014 20:27:48 -0400 (EDT) Date: Wed, 26 Mar 2014 20:27:48 -0400 (EDT) From: Rick Macklem To: pyunyh@gmail.com Message-ID: <1903781266.1237680.1395880068597.JavaMail.root@uoguelph.ca> In-Reply-To: <20140326023334.GB2973@michelle.cdnetworks.com> Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , FreeBSD Net , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 00:27:56 -0000 pyunyh@gmail.com wrote: > On Tue, Mar 25, 2014 at 07:10:35PM -0400, Rick Macklem wrote: > > Hi, > > > > First off, I hope you don't mind that I
cross-posted this, > > but I wanted to make sure both the NFS/iSCSI and networking > > types see it. > > If you look in this mailing list thread: > > http://docs.FreeBSD.org/cgi/mid.cgi?1850411724.1687820.1395621539316.JavaMail.root > > you'll see that several people have been working hard at testing > > and > > thanks to them, I think I now know what is going on. > > > Thanks for your hard work on narrowing down that issue. I'm too > busy for $work in these days so I couldn't find time to investigate > the issue. > > > (This applies to network drivers that support TSO and are limited > > to 32 transmit > > segments->32 mbufs in chain.) Doing a quick search I found the > > following > > drivers that appear to be affected (I may have missed some): > > jme, fxp, age, sge, msk, alc, ale, ixgbe/ix, nfe, e1000/em, re > > > > The magic number 32 was chosen long time ago when I implemented TSO > in non-Intel drivers. I tried to find optimal number to reduce the > size of kernel stack usage at that time. bus_dma(9) will coalesce > with previous segment if possible so I thought the number 32 was > not an issue. Not sure current bus_dma(9) also has the same code > though. The number 32 is arbitrary one so you can increase > it if you want. > Well, in the case of "ix" Jack Vogel says it is a hardware limitation. I can't change drivers that I can't test and don't know anything about the hardware. Maybe replacing m_collapse() with m_defrag() is an exception, since I know what that is doing and it isn't hardware related, but I would still prefer a review by the driver author/maintainer before making such a change. If there are drivers that you know can be increased from 32->35 please do so, since that will not only avoid the EFBIG failures but also avoid a lot of calls to m_defrag(). > > Further, of these drivers, the following use m_collapse() and not > > m_defrag() > > to try and reduce the # of mbufs in the chain. m_collapse() is not > > going to > > get the 35 mbufs down to 32 mbufs, as far as I can see, so these > > ones are > > more badly broken: > > jme, fxp, age, sge, alc, ale, nfe, re > > I guess m_defrag(9) is more optimized for non-TSO packets. You don't > want to waste CPU cycles to copy the full frame to reduce the > number of mbufs in the chain. For TSO packets, m_defrag(9) looks > better but if we always have to copy a full TSO packet to make TSO > work, driver writers have to invent better scheme rather than > blindly relying on m_defrag(9), I guess. > Yes, avoiding m_defrag() calls would be nice. For this issue, increasing the transmit segment limit from 32->35 does that, if the change can be done easily/safely. Otherwise, all I can think of is my suggestion to add something like if_hw_tsomaxseg which the driver can use to tell tcp_output() the driver's limit for # of mbufs in the chain. > > > > The long description is in the above thread, but the short version > > is: > > - NFS generates a chain with 35 mbufs in it for (read/readdir > > replies and write requests) > > made up of (tcpip header, RPC header, NFS args, 32 clusters of > > file data) > > - tcp_output() usually trims the data size down to tp->t_tsomax > > (65535) and > > then some more to make it an exact multiple of TCP transmit data > > size. > > - the net driver prepends an ethernet header, growing the length > > by 14 (or > > sometimes 18 for vlans), but in the first mbuf and not adding > > one to the chain.
> > - m_defrag() copies this to a chain of 32 mbuf clusters (because > > the total data > > length is <= 64K) and it gets sent > > > > However, if the data length is a little less than 64K when passed > > to tcp_output() > > so that the length including headers is in the range > > 65519->65535... > > - tcp_output() doesn't reduce its size. > > - the net driver adds an ethernet header, making the total data > > length slightly > > greater than 64K > > - m_defrag() copies it to a chain of 33 mbuf clusters, which > > fails with EFBIG > > --> trainwrecks NFS performance, because the TSO segment is dropped > > instead of sent. > > > > A tester also stated that the problem could be reproduced using > > iSCSI. Maybe > > Edward Napierala might know some details w.r.t. what kind of mbuf > > chain iSCSI > > generates? > > > > Also, one tester has reported that setting if_hw_tsomax in the > > driver before > > the ether_ifattach() call didn't make the value of tp->t_tsomax > > smaller. > > However, reducing IP_MAXPACKET (which is what it is set to by > > default) did > > reduce it. I have no idea why this happens or how to fix it, but it > > implies > > that setting if_hw_tsomax in the driver isn't a solution until this > > is resolved. > > > > So, what to do about this? > > First, I'd like a simple fix/workaround that can go into 9.3. > > (which is code > > freeze in May). The best thing I can think of is setting > > if_hw_tsomax to a > > smaller default value. (Line# 658 of sys/net/if.c in head.) > > > > Version A: > > replace > > ifp->if_hw_tsomax = IP_MAXPACKET; > > with > > ifp->if_hw_tsomax = min(32 * MCLBYTES - (ETHER_HDR_LEN + > > ETHER_VLAN_ENCAP_LEN), > > IP_MAXPACKET); > > plus > > replace m_collapse() with m_defrag() in the drivers listed above. > > > > This would only reduce the default from 65535->65518, so it only > > impacts > > the uncommon case where the output size (with tcpip header) is > > within > > this range. (As such, I don't think it would have a negative impact > > for > > drivers that handle more than 32 transmit segments.) > > From the testers, it seems that this is sufficient to get rid of > > the EFBIG > > errors. (The total data length including ethernet header doesn't > > exceed 64K, > > so m_defrag() fits it into 32 mbuf clusters.) > > > > The main downside of this is that there will be a lot of m_defrag() > > calls > > being done and they do quite a bit of bcopy()'ng. > > > > Version B: > > replace > > ifp->if_hw_tsomax = IP_MAXPACKET; > > with > > ifp->if_hw_tsomax = min(29 * MCLBYTES, IP_MAXPACKET); > > > > This one would avoid the m_defrag() calls, but might have a > > negative > > impact on TSO performance for drivers that can handle 35 transmit > > segments, > > since the maximum TSO segment size is reduced by about 6K. (Because > > of the > > second size reduction to an exact multiple of TCP transmit data > > size, the > > exact amount varies.) > > > > Possible longer term fixes: > > One longer term fix might be to add something like if_hw_tsomaxseg > > so that > > a driver can set a limit on the number of transmit segments (mbufs > > in chain) > > and tcp_output() could use that to limit the size of the TSO > > segment, as > > required. (I have a first stab at such a patch, but no way to test > > it, so > > I can't see that being done by May. Also, it would require changes > > to a lot > > of drivers to make it work. I've attached this patch, in case > > anyone wants > > to work on it?)
> > > > Another might be to increase the size of MCLBYTES (I don't see this > > as > > practical for 9.3, although the actual change is simple.). I do > > think > > that increasing MCLBYTES might be something to consider doing in > > the > > future, for reasons beyond fixing this. > > > > So, what do others think should be done? rick > > > > AFAIK all TSO capable drivers you mentioned above have no limit on > the number of TX segments in TSO path. Not sure about Intel > controllers though. Increasing the number of segment will consume > lots of kernel stack in those drivers. Given that ixgbe, which > seems to use 100, didn't show any kernal stack shortage, I think > bumping the number of segments would be quick way to address the > issue. > Well, bumping it from 32->35 is all it would take for NFS (can't comment w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32 for the 82599 (just so others aren't confused by the above comment). I understand your point was w.r.t. using 100 without blowing the kernel stack, but since the testers have been using "ix" with the 82599 chip which is limited to 32 transmit segments... However, please increase any you know can be safely done from 32->35, rick _______________________________________________ > freebsd-net@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to > "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 02:45:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 54089194; Thu, 27 Mar 2014 02:45:01 +0000 (UTC) Received: from mail-wg0-x234.google.com (mail-wg0-x234.google.com [IPv6:2a00:1450:400c:c00::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9909CF16; Thu, 27 Mar 2014 02:45:00 +0000 (UTC) Received: by mail-wg0-f52.google.com with SMTP id k14so1957774wgh.23 for ; Wed, 26 Mar 2014 19:44:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:reply-to:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=WxTzjm+ofTFvUdW3JPNglL8ykzwepfyCuwfZNI/YPEg=; b=EKPr2w+hTR6vs88hZhC6PFPf6I6eM/01VOGK3B14YBHAo2lb+XI+7xXczRU6pf+S7N dTD2nFOv5Y3jelyLhogcN+/u2I1KBDrp/SsrYJYkkhSDXzc/v16wNeAJfvfERYDZYG2u mtnzLsudIFbzRFFeS4p/OX2chVU1JICd7j80pqPuo6zhvZ2JRTBs57OtklKMMCm+5hF9 7O7VzmDZiZg44g3QIrlrwR8/f+yIHPYVai3RfIQpEdtrBVvLy8cT40HTb2WMggzgYuwj LpsdjqS7sm6vKkeaCyf/SDmXMT6aZ7xES+CT6ejjyEg6Z1/fPYD6EDEau8P3pGUetls+ jfqA== MIME-Version: 1.0 X-Received: by 10.180.89.102 with SMTP id bn6mr35999708wib.28.1395888298103; Wed, 26 Mar 2014 19:44:58 -0700 (PDT) Received: by 10.216.190.199 with HTTP; Wed, 26 Mar 2014 19:44:58 -0700 (PDT) In-Reply-To: <1903781266.1237680.1395880068597.JavaMail.root@uoguelph.ca> References: <20140326023334.GB2973@michelle.cdnetworks.com> <1903781266.1237680.1395880068597.JavaMail.root@uoguelph.ca> Date: Thu, 27 Mar 2014 10:44:58 +0800 Message-ID: Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem From: Marcelo Araujo To: Rick Macklem Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems , Alexander Motin , FreeBSD Net X-BeenThere: 
freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: araujo@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 02:45:01 -0000 Hello All, 2014-03-27 8:27 GMT+08:00 Rick Macklem : > > Well, bumping it from 32->35 is all it would take for NFS (can't comment > w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32 for the 82599 > (just so others aren't confused by the above comment). I understand > your point was w.r.t. using 100 without blowing the kernel stack, but > since the testers have been using "ix" with the 82599 chip which is > limited to 32 transmit segments... > > However, please increase any you know can be safely done from 32->35, rick > > I have plenty of machines using Intel X540 that is based on 82599 chipset. I have applied Rick's patch on ixgbe to check if the packet size is bigger than 65535 or cluster is bigger than 32. So far till now, on FreeBSD 9.1-RELEASE this problem does not happens. Unfortunately all my environment here is based on 9.1-RELEASE, with some merges from 10-RELEASE such like: NFS and IXGBE. Also I have applied the patch that Rick sent in another email with the subject 'NFS patch to use pagesize mbuf clusters'. And we can see some performance boost over 10Gbps Intel. However here at the company, we are still doing benchmarks. If someone wants to have my benchmark result, I can send it later. I'm wondering, if this update on ixgbe from 32->35 could be applied also for versions < 9.2. I'm thinking, that this problem arise only on 9-STABLE and consequently on 9.2-RELEASE. And fortunately or not 9.1-RELEASE doesn't share it. Best Regards, -- Marcelo Araujo araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 07:50:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5517ACB6 for ; Thu, 27 Mar 2014 07:50:08 +0000 (UTC) Received: from mail-wg0-x22f.google.com (mail-wg0-x22f.google.com [IPv6:2a00:1450:400c:c00::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E7792BBC for ; Thu, 27 Mar 2014 07:50:07 +0000 (UTC) Received: by mail-wg0-f47.google.com with SMTP id x12so2144708wgg.18 for ; Thu, 27 Mar 2014 00:50:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=mZKUshbA6/GVjNJe4IISX8T0rOLFGoSbdEnSBdPhkB8=; b=JR0NrSBCCjGjnoQPl31g0DOnyz2ifaMlztNmo0rKAdl4dSksYiK8OAyc976lAWzR37 FhZLlZA7f8Si6oMaqnbl4ScyyFeKN5wxfIDdsjVGuULTDNKpbIrecslIUpx8lEwveb6i 6KWzv8X0S2rW3miSJDGvablFuGwd7nv+zK3LUZvitoxLZMoDf/s4oEkZTqisxjLCkbCo lJgLCD/TUvAnKlX1U/0/nIb8Izr6ctjzTn7neP0mmyO4O1YsxQD3qhInWAqdufKCPEaW X9wETglnyK3yNug8n5Fwcc6ObjGnYpYqGEFPgbwl4l5KwizNnqwZcIjBINroTZn0ehbM qOgw== MIME-Version: 1.0 X-Received: by 10.180.97.72 with SMTP id dy8mr2623801wib.5.1395906606072; Thu, 27 Mar 2014 00:50:06 -0700 (PDT) Received: by 10.216.146.195 with HTTP; Thu, 27 Mar 2014 00:50:06 -0700 (PDT) Date: Thu, 27 Mar 2014 08:50:06 +0100 Message-ID: Subject: zfs l2arc warmup From: Joar Jegleim To: "freebsd-fs@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 
Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 07:50:08 -0000 Hi list ! I struggling to get a clear understanding of how the l2arc get warm ( zfs). It's a FreeBSD 9.2-RELEASE server. >From various forum I've come up with this which I have in my /boot/loader.conf # L2ARC tuning # Maximum number of bytes written to l2arc per feed # 8MB (actuall=vfs.zfs.l2arc_write_max*(1000 / vfs.zfs.l2arc_feed_min_ms)) # so 8MB every 200ms = 40MB/s vfs.zfs.l2arc_write_max=8388608 # Mostly only relevant at the first few hours after boot # write_boost, speed to fill l2arc until it is filled (after boot) # 70MB, same rule applys, multiply by 5 = 350MB/s vfs.zfs.l2arc_write_boost=73400320 # Not sure vfs.zfs.l2arc_headroom=2 # l2arc feeding period vfs.zfs.l2arc_feed_secs=1 # minimum l2arc feeding period vfs.zfs.l2arc_feed_min_ms=200 # control whether streaming data is cached or not vfs.zfs.l2arc_noprefetch=1 # control whether feed_min_ms is used or not vfs.zfs.l2arc_feed_again=1 # no read and write at the same time vfs.zfs.l2arc_norw=1 But what I really wonder is how does the l2arc get warmed up ? I'm thinking of 2 scenarios: a.: when arc is full, stuff that evict from arc is put over in l2arc, that means that files in the fs that are never accessed will never end up in l2arc, right ? b.: zfs run through fs in the background and fill up the l2arc for any file, regardless if it has been accessed or not ( this is the 'feature' I'd like ) I suspect scenario a is what really happens, and if so, how does people warmup the l2arc manually (?) I figured that if I rsync everything from the pool I want to be cache'ed, it will fill up the l2arc for me, which I'm doing right now. But it takes 3-4 days to rsync the whole pool . Is this how 'you' do it to warmup the l2arc, or am I missing something ? The thing is with this particular pool is that it serves somewhere between 20 -> 30 million jpegs for a website. The front page of the site will for every reload present a mosaic of about 36 jpegs, and the jpegs are completely randomly fetched from the pool. I don't know what jpegs will be fetched at any given time, so I'm installing about 2TB of l2arc ( the pool is about 1.6TB today) and I want the whole pool to be available from the l2arc . 
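(A back-of-the-envelope check on the numbers quoted above: the boosted rate of about 350MB/s could in principle write 1.6TB in roughly 1.3 hours, and the steady 40MB/s rate in roughly 11 hours. The catch is that the L2ARC feed thread only copies buffers that are already in ARC and close to eviction, so the cache can never fill faster than data actually passes through ARC; that is why a full read of the pool, such as the rsync above, is needed at all, and why it runs for days at the pool's random-read speed rather than at the SSDs' write speed.)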
Any input on my 'rsync solution' to warmup the l2arc is much appreciated :) -- ---------------------- Joar Jegleim Homepage: http://cosmicb.no Linkedin: http://no.linkedin.com/in/joarjegleim fb: http://www.facebook.com/joar.jegleim AKA: CosmicB @Freenode ---------------------- From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 09:13:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0AF4E36D for ; Thu, 27 Mar 2014 09:13:15 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C0FD0374 for ; Thu, 27 Mar 2014 09:13:14 +0000 (UTC) Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s2R9D5uu035442 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL) for ; Thu, 27 Mar 2014 05:13:06 -0400 (EDT) (envelope-from mikej@mikej.com) Received: from firewall.mikej.com (localhost.mikej.com [127.0.0.1]) by firewall.mikej.com (8.14.8/8.14.8) with ESMTP id s2R9COxI059505 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO) for ; Thu, 27 Mar 2014 05:13:05 -0400 (EDT) (envelope-from mikej@mikej.com) Received: (from www@localhost) by firewall.mikej.com (8.14.8/8.14.8/Submit) id s2R9CLFT059500; Thu, 27 Mar 2014 05:12:21 -0400 (EDT) (envelope-from mikej@mikej.com) X-Authentication-Warning: firewall.mikej.com: www set sender to mikej@mikej.com using -f To: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Thu, 27 Mar 2014 05:11:50 -0400 From: mikej In-Reply-To: <201403261230.s2QCU3vI095105@freefall.freebsd.org> References: <201403261230.s2QCU3vI095105@freefall.freebsd.org> Message-ID: <8659e58b9fabd9f553c8be5da5dc61fd@mail.mikej.com> X-Sender: mikej@mikej.com User-Agent: Roundcube Webmail/0.6-beta X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 09:13:15 -0000 I've been running the latest patch now on r263711 and want to give it a +1 No ZFS knobs set and I must go out of my way to have my system swap. I hope this patch gets a much wider review and can be put into the tree permanently. Karl, thanks for the working on this. 
Regards, Michael Jung From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 09:16:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 65B284E2 for ; Thu, 27 Mar 2014 09:16:10 +0000 (UTC) Received: from mail-yk0-x22d.google.com (mail-yk0-x22d.google.com [IPv6:2607:f8b0:4002:c07::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 27519390 for ; Thu, 27 Mar 2014 09:16:10 +0000 (UTC) Received: by mail-yk0-f173.google.com with SMTP id 10so1981168ykt.4 for ; Thu, 27 Mar 2014 02:16:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=72SjRSzNl50lfE+hu5NB2A54KlxKZpuoLl7T0FeaWRQ=; b=B+RN4OiEY+xUjmS1FfsdLrLa+q75yOH+TlbWqAT232hZDdYRyUiLDk+XbbJp3IddMI cVx0g7pqrAS/izkqL2NWeoLPqWxlTc9r5MY8NA8QKORgNkYkNFtx0IiSypS+GGjfKPRw dyG6Y96a8MhALrpOCIYqdZuL5fzV6n3MEfGbuUibF5HZbycNsVwfQyyJ1P7dSdjvgkz4 ekN/p1r9WeHAhB4paEcxEL59RfqzUOeEmJ93F2I8xh1siG/el42wTLEXjPZ8Yo063qX0 lyhfj/xF4wu/AkuEJZnaC/SiRgyLwUywxrt6oA+6o/TrUXTLLBUnyWIRaHQcm43lTWEe 0q/Q== MIME-Version: 1.0 X-Received: by 10.236.69.230 with SMTP id n66mr80155yhd.124.1395911769183; Thu, 27 Mar 2014 02:16:09 -0700 (PDT) Received: by 10.170.54.17 with HTTP; Thu, 27 Mar 2014 02:16:09 -0700 (PDT) In-Reply-To: References: Date: Thu, 27 Mar 2014 09:16:09 +0000 Message-ID: Subject: Re: zfs l2arc warmup From: krad To: Joar Jegleim Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 09:16:10 -0000 not sure if its made it into freebsd yet but https://www.illumos.org/issues/3525 On 27 March 2014 07:50, Joar Jegleim wrote: > Hi list ! > > I struggling to get a clear understanding of how the l2arc get warm ( zfs). > It's a FreeBSD 9.2-RELEASE server. > > From various forum I've come up with this which I have in my > /boot/loader.conf > # L2ARC tuning > # Maximum number of bytes written to l2arc per feed > # 8MB (actuall=vfs.zfs.l2arc_write_max*(1000 / vfs.zfs.l2arc_feed_min_ms)) > # so 8MB every 200ms = 40MB/s > vfs.zfs.l2arc_write_max=8388608 > # Mostly only relevant at the first few hours after boot > # write_boost, speed to fill l2arc until it is filled (after boot) > # 70MB, same rule applys, multiply by 5 = 350MB/s > vfs.zfs.l2arc_write_boost=73400320 > # Not sure > vfs.zfs.l2arc_headroom=2 > # l2arc feeding period > vfs.zfs.l2arc_feed_secs=1 > # minimum l2arc feeding period > vfs.zfs.l2arc_feed_min_ms=200 > # control whether streaming data is cached or not > vfs.zfs.l2arc_noprefetch=1 > # control whether feed_min_ms is used or not > vfs.zfs.l2arc_feed_again=1 > # no read and write at the same time > vfs.zfs.l2arc_norw=1 > > But what I really wonder is how does the l2arc get warmed up ? > I'm thinking of 2 scenarios: > > a.: when arc is full, stuff that evict from arc is put over in l2arc, > that means that files in the fs that are never accessed will never end > up in l2arc, right ? 
> > b.: zfs run through fs in the background and fill up the l2arc for any > file, regardless if it has been accessed or not ( this is the > 'feature' I'd like ) > > I suspect scenario a is what really happens, and if so, how does > people warmup the l2arc manually (?) > I figured that if I rsync everything from the pool I want to be > cache'ed, it will fill up the l2arc for me, which I'm doing right now. > But it takes 3-4 days to rsync the whole pool . > > Is this how 'you' do it to warmup the l2arc, or am I missing something ? > > The thing is with this particular pool is that it serves somewhere > between 20 -> 30 million jpegs for a website. The front page of the > site will for every reload present a mosaic of about 36 jpegs, and the > jpegs are completely randomly fetched from the pool. > I don't know what jpegs will be fetched at any given time, so I'm > installing about 2TB of l2arc ( the pool is about 1.6TB today) and I > want the whole pool to be available from the l2arc . > > > Any input on my 'rsync solution' to warmup the l2arc is much appreciated :) > > > -- > ---------------------- > Joar Jegleim > Homepage: http://cosmicb.no > Linkedin: http://no.linkedin.com/in/joarjegleim > fb: http://www.facebook.com/joar.jegleim > AKA: CosmicB @Freenode > > ---------------------- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 09:18:09 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 16A5A56E for ; Thu, 27 Mar 2014 09:18:09 +0000 (UTC) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.81]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id CBE9D3A3 for ; Thu, 27 Mar 2014 09:18:08 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1WT6CS-0006h0-Hl; Thu, 27 Mar 2014 10:02:12 +0100 Content-Type: text/plain; charset=iso-8859-15; format=flowed; delsp=yes To: "freebsd-fs@freebsd.org" , "Joar Jegleim" Subject: Re: zfs l2arc warmup References: Date: Thu, 27 Mar 2014 10:02:11 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.16 (Win32) X-Authenticated-As-Hash: 398f5522cb258ce43cb679602f8cfe8b62a256d1 X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: 0.8 X-Spam-Status: No, score=0.8 required=5.0 tests=BAYES_50 autolearn=disabled version=3.3.2 X-Scan-Signature: bc82d6a2e188773012c482ed32290af0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 09:18:09 -0000 On Thu, 27 Mar 2014 08:50:06 +0100, Joar Jegleim wrote: > Hi list ! > > I struggling to get a clear understanding of how the l2arc get warm ( > zfs). > It's a FreeBSD 9.2-RELEASE server. 
> > From various forum I've come up with this which I have in my > /boot/loader.conf > # L2ARC tuning > # Maximum number of bytes written to l2arc per feed > # 8MB (actuall=vfs.zfs.l2arc_write_max*(1000 / > vfs.zfs.l2arc_feed_min_ms)) > # so 8MB every 200ms = 40MB/s > vfs.zfs.l2arc_write_max=8388608 > # Mostly only relevant at the first few hours after boot > # write_boost, speed to fill l2arc until it is filled (after boot) > # 70MB, same rule applys, multiply by 5 = 350MB/s > vfs.zfs.l2arc_write_boost=73400320 > # Not sure > vfs.zfs.l2arc_headroom=2 > # l2arc feeding period > vfs.zfs.l2arc_feed_secs=1 > # minimum l2arc feeding period > vfs.zfs.l2arc_feed_min_ms=200 > # control whether streaming data is cached or not > vfs.zfs.l2arc_noprefetch=1 > # control whether feed_min_ms is used or not > vfs.zfs.l2arc_feed_again=1 > # no read and write at the same time > vfs.zfs.l2arc_norw=1 > > But what I really wonder is how does the l2arc get warmed up ? > I'm thinking of 2 scenarios: > > a.: when arc is full, stuff that evict from arc is put over in l2arc, > that means that files in the fs that are never accessed will never end > up in l2arc, right ? > > b.: zfs run through fs in the background and fill up the l2arc for any > file, regardless if it has been accessed or not ( this is the > 'feature' I'd like ) > > I suspect scenario a is what really happens, and if so, how does > people warmup the l2arc manually (?) > I figured that if I rsync everything from the pool I want to be > cache'ed, it will fill up the l2arc for me, which I'm doing right now. > But it takes 3-4 days to rsync the whole pool . > > Is this how 'you' do it to warmup the l2arc, or am I missing something ? > > The thing is with this particular pool is that it serves somewhere > between 20 -> 30 million jpegs for a website. The front page of the > site will for every reload present a mosaic of about 36 jpegs, and the > jpegs are completely randomly fetched from the pool. > I don't know what jpegs will be fetched at any given time, so I'm > installing about 2TB of l2arc ( the pool is about 1.6TB today) and I > want the whole pool to be available from the l2arc . > > > Any input on my 'rsync solution' to warmup the l2arc is much appreciated > :) 2TB of l2arc? Why don't you put your data on SSD's, get rid of the l2arc and buy some extra RAM instead. Than you don't need any warm-up. For future questions, please provide more details about your setup. What are disks, what ssds, how much RAM. How is your pool configured? Mirror, raidz, ... Things like that. Ronald. 
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 10:06:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9442A242 for ; Thu, 27 Mar 2014 10:06:03 +0000 (UTC) Received: from mail-qa0-x229.google.com (mail-qa0-x229.google.com [IPv6:2607:f8b0:400d:c00::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 53FF69F1 for ; Thu, 27 Mar 2014 10:06:03 +0000 (UTC) Received: by mail-qa0-f41.google.com with SMTP id j5so3552234qaq.14 for ; Thu, 27 Mar 2014 03:06:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=UCWPa3sAkRz1AC26tCDTReXnfo5BDuPHS7TToRybsvA=; b=X71r/sLQLuOy8kj4w0rJ+8AB3ImxUyzCx8f+pmc+C7Ba1B3BTUJqDoRj9PPwRvmmJi Vwcf3Rssv1CcJA8ZS65ob4rtIVW8okUJ/2SqBdApvP84d1v9jDNbZ7RKiBR8EKb9avQf nsYz1hh1Fp/cGFa5qma2Un37o6z01RIE3ZpYZrLNivnLXvZJvIAa3L2L81kgyByKoHXF VHIDWyoRXrZ2NeK5F9gfJcmuat0UR26y+IJrmJHwhIiPctvQyhmoyCue9dcT9CGrhjFT P+ipwOpjdZDqpzqnHHwl/EnZE3c9bvW7tkgs6r9/u7mFqMF9GnDeKNvoosjT8pUkPo/V fCSQ== MIME-Version: 1.0 X-Received: by 10.140.90.80 with SMTP id w74mr859092qgd.96.1395914762467; Thu, 27 Mar 2014 03:06:02 -0700 (PDT) Received: by 10.96.143.37 with HTTP; Thu, 27 Mar 2014 03:06:02 -0700 (PDT) In-Reply-To: References: Date: Thu, 27 Mar 2014 11:06:02 +0100 Message-ID: Subject: Re: zfs l2arc warmup From: Joar Jegleim To: Ronald Klop Content-Type: text/plain; charset=UTF-8 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 10:06:03 -0000 Hi, thnx for your input. The current setup : 2 X HP Proliant DL380 G7, 2xXeon (six core)@2667Mhz, 144GB DDR3 @1333Mhz (ecc, registered) Each server has an external shelf with 20 1TB SATA ('sas midline') 7200RPM disks The shelf is connected via a Smart Array P410i with 1GB cache . The second server is failover, I use zfs send/receive for replication ( with mbuffer). I had HAST in there for a couple months but got cold feet after some problems + I hate to have an expensive server just 'sitting there'. We will in near future start serving jpeg's from both servers which is a setup I like a lot more. I've setup 20 single disk 'raid 0' logical disks in that P410i, and built zfs mirror over those 20 disks ( raid 10) which give me about ~9TB of storage. For the record, I initially used an LSI SAS 9207-4i4e SGL HBA to connect the external shelf, but after some testing I realized I got more performance out of the P410i with cache enabled. I have dual power supplies as well as ups and want performance with the risk it involves. At the moment I have 2x Intel 520 480GB ssd's for l2arc, the plan is to add 2 more ssd's to get ~ 2TB for l2arc, and add a small ssd for log/zil The pool got default recordsize (128k), atime=off and I've set compression to lz4 and got a compressratio of 1.18x . 
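(As a sanity check on that figure: ten two-way mirror vdevs of 1TB drives is 10 x 1TB = 10TB raw, or roughly 9.1TiB, which is where the ~9TB usable number comes from.)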
I've set the following sysctl's related to zfs: # 100GB vfs.zfs.arc_max=107374182400 # used to be 5(default) trying 1 vfs.zfs.txg.timeout="1" # this to work with the raid ctrl cache vfs.zfs.cache_flush_disable=1 vfs.zfs.write_limit_shift=9 vfs.zfs.txg.synctime_ms=200 # L2ARC tuning # Maximum number of bytes written to l2arc per feed # 8MB/s (actuall=vfs.zfs.l2arc_write_max*vfs.zfs.l2arc_feed_min_ms) # so 8MB every 200ms = 40MB/s vfs.zfs.l2arc_write_max=8388608 # Mostly only relevant at the first few hours after boot # write_boost, speed to fill l2arc until it is filled (after boot) # 70MB/s, same rule applys, multiply by 5 vfs.zfs.l2arc_write_boost=73400320 # Not sure vfs.zfs.l2arc_headroom=2 # l2arc feeding period vfs.zfs.l2arc_feed_secs=1 # minimum l2arc feeding period vfs.zfs.l2arc_feed_min_ms=200 # control whether streaming data is cached or not vfs.zfs.l2arc_noprefetch=1 # control whether feed_min_ms is used or not vfs.zfs.l2arc_feed_again=1 # no read and write at the same time vfs.zfs.l2arc_norw=1 > 2TB of l2arc? > Why don't you put your data on SSD's, get rid of the l2arc and buy some > extra RAM instead. > Than you don't need any warm-up. I'm considering this option, but today I have ~10TB of storage, and need space for future growth + I like the idea that the l2arc may die and I'll loose performance, not my data. + I reckon I'd have to use a lot more expensive ssd's if I was to use them for main datastore, as l2arc I can use cheaper ssd's. Those intel 520's can deliver ~50 000 iops, and I need iops, not necessarily bandwidth. At least that's my understandig of this. Open for input ! :) On 27 March 2014 10:02, Ronald Klop wrote: > On Thu, 27 Mar 2014 08:50:06 +0100, Joar Jegleim > wrote: > >> Hi list ! >> >> I struggling to get a clear understanding of how the l2arc get warm ( >> zfs). >> It's a FreeBSD 9.2-RELEASE server. >> >> From various forum I've come up with this which I have in my >> /boot/loader.conf >> # L2ARC tuning >> # Maximum number of bytes written to l2arc per feed >> # 8MB (actuall=vfs.zfs.l2arc_write_max*(1000 / vfs.zfs.l2arc_feed_min_ms)) >> # so 8MB every 200ms = 40MB/s >> vfs.zfs.l2arc_write_max=8388608 >> # Mostly only relevant at the first few hours after boot >> # write_boost, speed to fill l2arc until it is filled (after boot) >> # 70MB, same rule applys, multiply by 5 = 350MB/s >> vfs.zfs.l2arc_write_boost=73400320 >> # Not sure >> vfs.zfs.l2arc_headroom=2 >> # l2arc feeding period >> vfs.zfs.l2arc_feed_secs=1 >> # minimum l2arc feeding period >> vfs.zfs.l2arc_feed_min_ms=200 >> # control whether streaming data is cached or not >> vfs.zfs.l2arc_noprefetch=1 >> # control whether feed_min_ms is used or not >> vfs.zfs.l2arc_feed_again=1 >> # no read and write at the same time >> vfs.zfs.l2arc_norw=1 >> >> But what I really wonder is how does the l2arc get warmed up ? >> I'm thinking of 2 scenarios: >> >> a.: when arc is full, stuff that evict from arc is put over in l2arc, >> that means that files in the fs that are never accessed will never end >> up in l2arc, right ? >> >> b.: zfs run through fs in the background and fill up the l2arc for any >> file, regardless if it has been accessed or not ( this is the >> 'feature' I'd like ) >> >> I suspect scenario a is what really happens, and if so, how does >> people warmup the l2arc manually (?) >> I figured that if I rsync everything from the pool I want to be >> cache'ed, it will fill up the l2arc for me, which I'm doing right now. >> But it takes 3-4 days to rsync the whole pool . 
>> >> Is this how 'you' do it to warmup the l2arc, or am I missing something ? >> >> The thing is with this particular pool is that it serves somewhere >> between 20 -> 30 million jpegs for a website. The front page of the >> site will for every reload present a mosaic of about 36 jpegs, and the >> jpegs are completely randomly fetched from the pool. >> I don't know what jpegs will be fetched at any given time, so I'm >> installing about 2TB of l2arc ( the pool is about 1.6TB today) and I >> want the whole pool to be available from the l2arc . >> >> >> Any input on my 'rsync solution' to warmup the l2arc is much appreciated >> :) > > > > 2TB of l2arc? > Why don't you put your data on SSD's, get rid of the l2arc and buy some > extra RAM instead. > Than you don't need any warm-up. > > For future questions, please provide more details about your setup. What are > disks, what ssds, how much RAM. How is your pool configured? Mirror, raidz, > ... Things like that. > > Ronald. -- ---------------------- Joar Jegleim Homepage: http://cosmicb.no Linkedin: http://no.linkedin.com/in/joarjegleim fb: http://www.facebook.com/joar.jegleim AKA: CosmicB @Freenode ---------------------- From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 10:10:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7F271303 for ; Thu, 27 Mar 2014 10:10:49 +0000 (UTC) Received: from mail-qa0-x234.google.com (mail-qa0-x234.google.com [IPv6:2607:f8b0:400d:c00::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 40BC3A16 for ; Thu, 27 Mar 2014 10:10:49 +0000 (UTC) Received: by mail-qa0-f52.google.com with SMTP id m5so3490657qaj.25 for ; Thu, 27 Mar 2014 03:10:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=5mgbOkKBsQpDb1CX2QukYvzTIZJ8AuNWlwxMY5EYD0Y=; b=NzoSNV1J5/2n75R3jU+6C38U0j9SdIsvEQYWfv1VtZpu5RIj3rkAyQyEUlchPIku6d y+HO3SlGEjENHcxUbnv8aMC82D2AgG2lwPkxlmy9Z4Ibp4SNh929YAMlKfq3pm16T1Op URoFIKNCoH6wTeQ+tUTVnj3MdQDzF0d6k3wuNmsLq+HNID3IZFcqZUjVsyW4Z0Dzalcd Ak03eX6/4aOS2mWSc1G92Y49SomvqPLchEg5R1inVBGcTY+5bjwyYSYKZWuUitkKAmCY DZ5JrcxE/vsHZDbgsAux8Yde0D5sRmRf6hed5SpzjARMTS05zWEqGMvfIWWLcRMxIq8W 4SSQ== MIME-Version: 1.0 X-Received: by 10.140.90.80 with SMTP id w74mr882396qgd.96.1395915048362; Thu, 27 Mar 2014 03:10:48 -0700 (PDT) Received: by 10.96.143.37 with HTTP; Thu, 27 Mar 2014 03:10:48 -0700 (PDT) In-Reply-To: References: Date: Thu, 27 Mar 2014 11:10:48 +0100 Message-ID: Subject: Re: zfs l2arc warmup From: Joar Jegleim To: krad Content-Type: text/plain; charset=UTF-8 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 10:10:49 -0000 thnx, I found some similar post over at illumos related to persistent l2arc yesterday. It's interesting :) But it's really not a problem for me how long it takes to warm up the l2arc, if it takes a week that's ok. 
After all I don't plan on reboot'ing this setup very often + I have 2 servers so I have the option to let the server warmup until i hook it into production again after maintenance / patch upgrade and so on . I'm just curious about wether or not the l2arc warmup itself, or if I would have to do that manual rsync to force l2arc warmup. On 27 March 2014 10:16, krad wrote: > not sure if its made it into freebsd yet but > > https://www.illumos.org/issues/3525 > > > > > On 27 March 2014 07:50, Joar Jegleim wrote: >> >> Hi list ! >> >> I struggling to get a clear understanding of how the l2arc get warm ( >> zfs). >> It's a FreeBSD 9.2-RELEASE server. >> >> From various forum I've come up with this which I have in my >> /boot/loader.conf >> # L2ARC tuning >> # Maximum number of bytes written to l2arc per feed >> # 8MB (actuall=vfs.zfs.l2arc_write_max*(1000 / vfs.zfs.l2arc_feed_min_ms)) >> # so 8MB every 200ms = 40MB/s >> vfs.zfs.l2arc_write_max=8388608 >> # Mostly only relevant at the first few hours after boot >> # write_boost, speed to fill l2arc until it is filled (after boot) >> # 70MB, same rule applys, multiply by 5 = 350MB/s >> vfs.zfs.l2arc_write_boost=73400320 >> # Not sure >> vfs.zfs.l2arc_headroom=2 >> # l2arc feeding period >> vfs.zfs.l2arc_feed_secs=1 >> # minimum l2arc feeding period >> vfs.zfs.l2arc_feed_min_ms=200 >> # control whether streaming data is cached or not >> vfs.zfs.l2arc_noprefetch=1 >> # control whether feed_min_ms is used or not >> vfs.zfs.l2arc_feed_again=1 >> # no read and write at the same time >> vfs.zfs.l2arc_norw=1 >> >> But what I really wonder is how does the l2arc get warmed up ? >> I'm thinking of 2 scenarios: >> >> a.: when arc is full, stuff that evict from arc is put over in l2arc, >> that means that files in the fs that are never accessed will never end >> up in l2arc, right ? >> >> b.: zfs run through fs in the background and fill up the l2arc for any >> file, regardless if it has been accessed or not ( this is the >> 'feature' I'd like ) >> >> I suspect scenario a is what really happens, and if so, how does >> people warmup the l2arc manually (?) >> I figured that if I rsync everything from the pool I want to be >> cache'ed, it will fill up the l2arc for me, which I'm doing right now. >> But it takes 3-4 days to rsync the whole pool . >> >> Is this how 'you' do it to warmup the l2arc, or am I missing something ? >> >> The thing is with this particular pool is that it serves somewhere >> between 20 -> 30 million jpegs for a website. The front page of the >> site will for every reload present a mosaic of about 36 jpegs, and the >> jpegs are completely randomly fetched from the pool. >> I don't know what jpegs will be fetched at any given time, so I'm >> installing about 2TB of l2arc ( the pool is about 1.6TB today) and I >> want the whole pool to be available from the l2arc . 
>> >> >> Any input on my 'rsync solution' to warmup the l2arc is much appreciated >> :) >> >> >> -- >> ---------------------- >> Joar Jegleim >> Homepage: http://cosmicb.no >> Linkedin: http://no.linkedin.com/in/joarjegleim >> fb: http://www.facebook.com/joar.jegleim >> AKA: CosmicB @Freenode >> >> ---------------------- >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > -- ---------------------- Joar Jegleim Homepage: http://cosmicb.no Linkedin: http://no.linkedin.com/in/joarjegleim fb: http://www.facebook.com/joar.jegleim AKA: CosmicB @Freenode ---------------------- From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 10:21:07 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 19B4A53B for ; Thu, 27 Mar 2014 10:21:07 +0000 (UTC) Received: from mail-ee0-x231.google.com (mail-ee0-x231.google.com [IPv6:2a00:1450:4013:c00::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A01A8ACD for ; Thu, 27 Mar 2014 10:21:06 +0000 (UTC) Received: by mail-ee0-f49.google.com with SMTP id c41so2622725eek.22 for ; Thu, 27 Mar 2014 03:21:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=bCwJ7SxefoRzE7/D/BPZTNjPiPwxQpHx/sWDGedktks=; b=ADRXNy6FlNaMcoiXH6dW9IjMmG31g9GZr+c7VA6scHH0JcypL2mjQeOyb8PP4PS+Iz ARm/NeHJ8HpcpRVrf4gepcfojvyUzfwKenlQOLXKBPJ0eafU4vd+J30Duiqk5epDDhee kuuQOCvTFpeZ19IYyy2p4BWPzR6hq2O5PGVOQv6p1QCa8XU6LRjSkix7pVtDbiqnD47/ IzsChKEqpcITX10ZGqN9LhINmiByYG3UV6x5GoUJvWtyWRxkwe0RTKR+EL0bXbP11aib Ovm8oXGiutaAvB2QWCItZBpgIH1Ct1xt7JQGO5rfWWFgjJDR+AFM9mcnKZN4L279r3XK aOGg== X-Received: by 10.15.73.134 with SMTP id h6mr1008890eey.3.1395915663397; Thu, 27 Mar 2014 03:21:03 -0700 (PDT) Received: from [192.168.1.129] ([193.173.55.180]) by mx.google.com with ESMTPSA id l42sm3383924eew.19.2014.03.27.03.21.02 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 27 Mar 2014 03:21:02 -0700 (PDT) Message-ID: <5333FB8F.7010500@gmail.com> Date: Thu, 27 Mar 2014 11:21:03 +0100 From: Johan Hendriks User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Joar Jegleim Subject: Re: zfs l2arc warmup References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 10:21:07 -0000 Joar Jegleim schreef: > Hi list ! > > I struggling to get a clear understanding of how the l2arc get warm ( zfs). > It's a FreeBSD 9.2-RELEASE server. 
> > From various forum I've come up with this which I have in my /boot/loader.conf > # L2ARC tuning > # Maximum number of bytes written to l2arc per feed > # 8MB (actuall=vfs.zfs.l2arc_write_max*(1000 / vfs.zfs.l2arc_feed_min_ms)) > # so 8MB every 200ms = 40MB/s > vfs.zfs.l2arc_write_max=8388608 > # Mostly only relevant at the first few hours after boot > # write_boost, speed to fill l2arc until it is filled (after boot) > # 70MB, same rule applys, multiply by 5 = 350MB/s > vfs.zfs.l2arc_write_boost=73400320 > # Not sure > vfs.zfs.l2arc_headroom=2 > # l2arc feeding period > vfs.zfs.l2arc_feed_secs=1 > # minimum l2arc feeding period > vfs.zfs.l2arc_feed_min_ms=200 > # control whether streaming data is cached or not > vfs.zfs.l2arc_noprefetch=1 > # control whether feed_min_ms is used or not > vfs.zfs.l2arc_feed_again=1 > # no read and write at the same time > vfs.zfs.l2arc_norw=1 > > But what I really wonder is how does the l2arc get warmed up ? > I'm thinking of 2 scenarios: > > a.: when arc is full, stuff that evict from arc is put over in l2arc, > that means that files in the fs that are never accessed will never end > up in l2arc, right ? > > b.: zfs run through fs in the background and fill up the l2arc for any > file, regardless if it has been accessed or not ( this is the > 'feature' I'd like ) > > I suspect scenario a is what really happens, and if so, how does > people warmup the l2arc manually (?) > I figured that if I rsync everything from the pool I want to be > cache'ed, it will fill up the l2arc for me, which I'm doing right now. > But it takes 3-4 days to rsync the whole pool . > > Is this how 'you' do it to warmup the l2arc, or am I missing something ? > > The thing is with this particular pool is that it serves somewhere > between 20 -> 30 million jpegs for a website. The front page of the > site will for every reload present a mosaic of about 36 jpegs, and the > jpegs are completely randomly fetched from the pool. > I don't know what jpegs will be fetched at any given time, so I'm > installing about 2TB of l2arc ( the pool is about 1.6TB today) and I > want the whole pool to be available from the l2arc . 
>
> Any input on my 'rsync solution' to warm up the l2arc is much appreciated :)
>

A nice blog about the L2ARC:
https://blogs.oracle.com/brendan/entry/test
https://blogs.oracle.com/brendan/entry/l2arc_screenshots

regards
Johan

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 10:40:29 2014
Date: Thu, 27 Mar 2014 11:40:18 +0100
From: Rainer Duffner
To: Joar Jegleim
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
Message-ID: <20140327114018.6d50b666@suse3.ewadmin.local>

On Thu, 27 Mar 2014 08:50:06 +0100, Joar Jegleim wrote:

> Hi list!
>
> I'm struggling to get a clear understanding of how the L2ARC gets warm
> (ZFS). It's a FreeBSD 9.2-RELEASE server.
>
> The thing with this particular pool is that it serves somewhere between
> 20-30 million jpegs for a website. On every reload the front page of the
> site presents a mosaic of about 36 jpegs, and the jpegs are fetched
> completely at random from the pool.
> I don't know which jpegs will be fetched at any given time, so I'm
> installing about 2TB of l2arc (the pool is about 1.6TB today) and I want
> the whole pool to be available from the l2arc.
>
> Any input on my 'rsync solution' to warm up the l2arc is much
> appreciated :)

Don't you need RAM for the L2ARC, too?

http://www.richardelling.com/Home/scripts-and-programs-1/l2arc

I'd just max out the RAM on the DL370 - you'd need to do that anyway,
according to the above spreadsheet....
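Rainer's point about RAM is worth a rough number: every record cached in
the L2ARC keeps a header in the ARC itself, so a fully populated L2ARC has
a main-memory cost. A minimal back-of-the-envelope sketch, assuming
roughly 180 bytes of ARC header per L2ARC record and a 128K average record
size (both figures are assumptions and vary by ZFS version and workload):

    records for a 2 TB L2ARC:  2 TiB / 128 KiB          =  16,777,216
    header RAM:                16,777,216 * ~180 bytes  ~= 2.8 GiB

The actual overhead, and how far the warmup has progressed, can be read
from the ARC statistics on a running system:

    # l2_size grows as the device fills; l2_hdr_size is the ARC RAM cost:
    sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_hdr_size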
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 11:52:49 2014
Date: Thu, 27 Mar 2014 06:52:38 -0500
From: Karl Denninger
To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Message-ID: <53341106.4060101@denninger.net>
In-Reply-To: <8659e58b9fabd9f553c8be5da5dc61fd@mail.mikej.com>

On 3/27/2014 4:11 AM, mikej wrote:
> I've been running the latest patch now on r263711 and want to give it
> a +1.
>
> No ZFS knobs set, and I must go out of my way to make my system swap.
>
> I hope this patch gets a much wider review and can be put into the
> tree permanently.
>
> Karl, thanks for working on this.
>
> Regards,
>
> Michael Jung

No problem; I was being driven insane by the stalls and related bad
behavior... and there's that old saw about complaining about something
without proposing a fix for it (I've done it!) being "less than optimum",
so.... :-)

Hopefully wider review (and, if the general consensus is similar to what
I've seen here and what you're reporting as well, inclusion in the
codebase) will come.

On my sandbox system I now have to get truly abusive before I can make
the system swap, but that load is synthetic, and we all know what
sometimes happens when you try to extrapolate from synthetic loads to
real production ones.

What really has my attention is the impact on systems running live
production loads. The patch has entirely changed the character of those
machines, working equally well for both pure ZFS machines and mixed
UFS/ZFS systems.
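The "must go out of my way to make it swap" observation is easy to watch
for on any box running this workload; a minimal sketch, using only stock
FreeBSD sysctl names (nothing patch-specific is assumed):

    # Current ARC size and its configured ceiling, in bytes:
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
    # Swap devices and how much of each is actually in use:
    swapinfo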
One of these systems, which gets pounded on pretty hard and has a
moderately large configuration (~10TB of storage, 2 quad-core Xeon
processors and 24GB of RAM, serving a combination of internal Samba
users, a decently large Postgres installation supporting an
externally-facing web forum and blog application, email, and similar
things), has been completely transformed from being "frequently
challenged" by its workload to literally loafing 90%+ of the day. DBMS
response times have seen their standard deviation drop by an order of
magnitude, with best-response times for one of the most common query
sequences (~30 separate ops) down from ~180ms to ~140ms.

This particular machine has a separate pool for the system itself (root,
usr and var), which was formerly UFS because it had to be in order to
avoid the worst of the "stall" misbehavior. It also has two other pools
on it: one for read-nearly-only data sets comprised of very large files
that are almost archival in character, and a second that holds the
system's "working set". The latter has a separate intent log; I had a
cache SSD drive on it as well, but have recently dropped that, as with
these changes it no longer produces a material improvement in
performance. I'm frankly not sure the intent log is helping any more
either, but I've yet to drop it and instrument the results -- it used to
be *necessary* to avoid nasty problems during busy periods.

I now have that machine set up booting from ZFS with the system on a
mirrored pool dedicated to system images, with lz4 *and* dedup on (for
that filesystem's root), which allows me to clone it almost instantly,
start a jail on the clone, and then do a "buildworld buildkernel -j8"
while only allocating storage for actual changes (a command sketch
follows below). The dedup ratio on that mirror set is 1.4x, and lz4 is
showing a net compression ratio of 2.01x. Even better, I cannot provoke
misbehavior by doing this sort of thing in the middle of the day, where
formerly that was just begging for trouble; the impact on
user-perceptible performance during it is zero, although I can see the
degradation (a modest increase in system latency) in the stats.

Oh, did I mention that everything except the boot/root/usr/var
filesystems (including swap) is geli-encrypted on this machine as well,
and that the nightly PC backup jobs bury the GigE interface they're
attached to -- and sustain that performance against the ZFS disks for the
duration? (The machine does have AESNI loaded....)

Finally, swap allocation remains at zero throughout all of this.

At present, coming off the overnight period -- which has an activity
spike for routine in-house backup activity from connected PCs but is
otherwise the "low point" of activity -- the system shows 1GB of free
memory, an "auto-tuned" 12.9GB of ARC cache (with a maximum size of
22.3GB), and inactive pages have remained stable. Wired memory is almost
19GB, with Postgres using a sizable chunk of it. Cache efficiency is
claimed to be 98.9% (!). That'll go down somewhat over the day, but
during the busiest part of the day it remains well into the 90s, which
I'm sure has a heck of a lot to do with the performance improvements....

Cross-posted over to -STABLE in the hope of expanding review and testing
by others.
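The clone-and-build workflow described above maps onto ordinary ZFS
commands; a minimal sketch with hypothetical pool/dataset names
(zroot/images/...), not the actual layout of the machine in question:

    # Hypothetical dataset names; illustrative only.
    zfs set compression=lz4 zroot/images/base    # lz4 on the image filesystem
    zfs set dedup=on zroot/images/base           # dedup so clones share blocks
    zfs snapshot zroot/images/base@pre-build     # point-in-time snapshot
    zfs clone zroot/images/base@pre-build zroot/images/build  # near-instant
    # Start a jail rooted on the clone and build inside it; only blocks
    # that actually change consume new storage:
    #   (inside the jail)  cd /usr/src && make -j8 buildworld buildkernel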
--
-- Karl
karl@denninger.net

[S/MIME cryptographic signature (smime.p7s) elided]
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 11:46:16 2014
Date: Thu, 27 Mar 2014 19:46:12 +0800
From: Marcelo Araujo
Reply-To: araujo@FreeBSD.org
To: Rick Macklem
Cc: FreeBSD Filesystems, Alexander Motin
Subject: Re: review/test: NFS patch to use pagesize mbuf clusters
In-Reply-To: <459657309.24706896.1395187612496.JavaMail.root@uoguelph.ca>

Hello Rick,

We ran a few tests here, and we could see a little improvement for READ!
We are still double-checking it.

All our systems have 10G Intel interfaces with TSO enabled, and they have
the 32-transmit-segment limitation. We ran the tests several times and
didn't see any regression. Our systems are based on 9.1-RELEASE with some
NFS and IXGBE merges from 10-RELEASE.

Our machine:
NIC - 10G Intel X540, based on the 82599 chipset.
RAM - 24G
CPU - Intel Xeon E5-2448L 1.80GHz.
Motherboard - Homemade.
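Whether TSO is actually in play on a given interface is visible from
ifconfig; a minimal sketch, assuming the ixgbe interface is named ix0
(the interface name is hypothetical):

    ifconfig ix0          # look for TSO4/TSO6 in the options= line
    ifconfig ix0 -tso     # disable TCP segmentation offload
    ifconfig ix0 tso      # re-enable it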
Attached there is a small report; from page 18 onward you can see some
graphs that make it easier to read the results.

So, let me know if you want to try anything else, any other patch and so
on. I can keep the environment for one more week and run more tests.

Best Regards,

2014-03-19 8:06 GMT+08:00 Rick Macklem:
> Marcelo Araujo wrote:
> >
> > Hello Rick,
> >
> > I have a couple of machines with 10G interfaces capable of TSO.
> > What kind of result are you expecting? Is it a speedup in reads?
> >
> Well, if NFS is working well on these systems, I would hope you
> don't see any regression.
>
> If your TSO-enabled interfaces can handle more than 32 transmit
> segments, you should see very little effect (there is usually a
> #define constant in the driver with something like TX_SEGMAX in
> its name; check whether it is >= 34).
>
> Even if your network interface is one of the ones limited to 32
> transmit segments, the driver usually fixes up the list via a call
> to m_defrag(). Although this involves a bunch of bcopy()'ing, you
> still might not see any easily measured performance improvement,
> assuming m_defrag() is getting the job done.
> (Network latency and disk latency in the server will predominate,
> I suspect. A server built entirely using SSDs might be a different
> story?)
>
> Thanks for doing the testing, since a lack of a regression is what I
> care about most. (I am hoping this resolves cases where users have
> had to disable TSO to make NFS work OK for them.)
>
> rick
>
> > I'm going to run some tests today, but against 9.1-RELEASE, which
> > is what my servers are running.
> >
> > Best Regards,
> >
> > 2014-03-18 9:26 GMT+08:00 Rick Macklem <rmacklem@uoguelph.ca>:
> >
> > Hi,
> >
> > Several of the TSO-capable network interfaces have a limit of
> > 32 mbufs in the transmit mbuf chain (the drivers call these transmit
> > segments, which I admit I find confusing).
> >
> > For a 64K read/readdir reply or 64K write request, NFS passes
> > a list of 34 mbufs down to TCP. TCP will split the list, since
> > it is slightly more than 64K bytes, but that split will normally
> > be a copy by reference of the last mbuf cluster. As such, normally
> > the network interface will get a list of 34 mbufs.
> >
> > For TSO-enabled interfaces that are limited to 32 mbufs in the
> > list, the usual workaround in the driver is to copy { real copy,
> > not copy by reference } the list to 32 mbuf clusters via m_defrag().
> > (A few drivers use m_collapse(), which is less likely to succeed.)
> >
> > As a workaround to this problem, the attached patch modifies NFS
> > to use larger pagesize clusters, so that the 64K RPC message is
> > in 18 mbufs (assuming a 4K pagesize).
> >
> > Testing on my slow hardware, which does not have TSO capability,
> > shows it to be performance-neutral, but I believe avoiding the
> > overhead of copying via m_defrag() { and possible failures
> > resulting in the message never being transmitted } makes this
> > patch worth doing.
> >
> > As such, I'd like to request review and/or testing of this patch
> > by anyone who can do so.
> >
> > Thanks in advance for your help, rick
> > ps: If you don't get the attachment, just email and I'll
> > send you a copy.
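The mbuf counts Rick quotes follow directly from the cluster sizes; a
quick worked check (2K is the standard MCLBYTES cluster size on FreeBSD;
the exact split between data clusters and RPC-header mbufs is an
assumption for illustration):

    64K message in 2K clusters:  65536 / 2048 = 32 data clusters
                                 + ~2 header mbufs -> ~34 mbufs, just over
                                 the 32-segment TSO limit (forcing m_defrag())
    64K message in 4K clusters:  65536 / 4096 = 16 data clusters
                                 + ~2 header mbufs -> 18 mbufs, comfortably
                                 under the limit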
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> --
> Marcelo Araujo
> araujo@FreeBSD.org

--
Marcelo Araujo
araujo@FreeBSD.org

[Attachment: Benchmarkoriginal.pdf (application/pdf, base64 content elided)]
+9+Az2xuJVIz8vnVK60X8nN7YfFUvCMifF0dDUngidr0OEZ70zM3EIwt+HYRWhSywZI8BFqsJkD5 dl2gRYpKqprQREO+gT1V7W2zcfitE8m4cje1qjf/ipiMwC7BfbSfkRZ6/OxnkvX4F/0Mca3RjbSf yWxqElv9oxgvW/X/fJTwikAflvOK2IxcK3WD8A03ckDraLH4+AiDeqAMlpWLu3rrF5yag5PCqOYr zULD2R0SgKS6IraRuj4uxkRmLe+o+EJdZ1r2sFuNigwg1UC26lIzwGySnlf2Z0HR525Utv2Z4s+y v1dUL7mNEcyAWl2pbA8wMjyVVxsFItqdnBnVBm/r66W3vSLUC50Mlf3ZY0CNaaw62sr7ihbUQDYd acGQLMRXqbCXXCwkewjiYbP+sS0d1NtU/Xwj0mWqrN3IUXQEkyI2brZvHeTbsCNghKIideOKW0/D SFnZQDYqyDaDamRSJOleMZ+XMdBkcg9tl2rWlxyKjlsROnTcukA8FZiUiIH40a++wvAIVaQvnbNK PcYOQsqqtRu79F1Cqvda97iaVfL9VfZ8oUIQSw2PW2AuVRchzxFVH2rQHkmgoqIDByqriramBhGI Z/143kwQLXHYhF3fa8yJrGXk+x60I6OXKEdoOzIXKc7+Whoic6nGFraOBSmvWipby1JvSx0LZHuF SegHzaW3G412IUmUgBEnLU8s5M0gxgZGN6p7ORt8DhL94PkxG3y7hcB+1Pt+NthPWvycDR4Hv/rs ndp6Lq9D3UVK2pf2k2RHxt0sF3d7XUor5m77bFCdTKvUI5tZT+huw8bnPEnTJTXdmNiEuhQPdJuO 9lMs0lUHfzSfaXk/2rZ7m3X3NtLFY9PwLMfPTcPgProkyyWkMpcR83paH4pyvy8ABhrruphXo6OM +xobH58D8tAtwl4DPrquyMmKxWeL7I/iFUSkuxu9BFKZKV836oKj6MIEv3DRJZ2jEpqjw6MZk9x3 7s7SJzUEaeYofro4V9RSDmSrg8FbZA02JnAWfckNbtTvQ6djgKA5USKJxK9zhGxChH9ILJ8NxXl5 AAVis3re8oDOEUrugVh0Lpip7Vc+xtmixjOMd2/pUo2n5hHovcwJE1Og97mGUWuk97mGkUak94Ig gU5FIjE1HLf3Ht1Ih41Sjcdso0Rk1GyobAWATidylDzmioXj46dQCRcSG6/8LEtHxsPEqy3/Zx13 yYvAeM7oOn0sOQfuMVceR4/8gwuOX2ugMS6QQc1RhmG0npKKkaSLgizbUfaFCn8UP9p6Th/Pp/2c Pp7VoUGjRo7KunyTTED2DzWSdD+nrA6zTdnyfYVnoeILL9WhJHMWP7lRBQ1aLeM+744mQZLSQRGI 3rN5XxC+LtNPV5Z/9h8vya0pXxD+46Wir3LmEO7e6LFaNjS8MWjZRrTS8J2nbsJJHgGXoZ8l5YCJ fiElw3DZyh74LlTQFsg+J5LbNHWhss2JNNsZyF7qVd1ZrcGNkTuuZM+y1Xpw1tEMlX1ORBeKGl6p +MKH0bCwSHQjnTI3A9m/XumI40iaC5UffT3IUJGBbJXcrAcNhLxcmiYvY9N+eqDrsfPpfoFsH6fw RXUERHRMxMMgfOGT+tXIIHwhB2MZEiK0BOiRb+inK12/ezybitaLmcxZt7JHV+laimxSR0RoLlZr evWQPLKBvJr/wOgMxA/gYAelWYgfwGns1y/rlrN4RrADtNkS3Ui/e0DvEImu67iKORIdYg+TobK/ R+v7ScvBjXRGNFoP+BRkkGFtYSu5tEPkHoXBAnNhw2d79i7pSn1EaaNwRkiwR3mxp9tbaLlFd/ZG aoELzS9gyEJevEYPsZBt/KNfQ5FEjFAptGEhnhEqhUQWspWZ/fZ6GzAamh8sZPtMRvMDyu8zI/2S pqWaA0a3L2lqxIfyLLEWxI8eAdPWb5MXiK/2SJcdTaTc37R1pmIg24YtI9w2a5b+PqIr29ZDfLkn umFbI7PUsrING9j9jVDycOotcFZG2TPX0M51ACP81GapeLl0WK5jtL1q5wv1a49uhK4ER44KDkn6 TtyiGkwfvpNNeNsjMAyqFnPc/WFRN2aSBKITmFQemQPRCUxKKEUFh8CkmmPkb4QYlZrVkX9tRoxC hxqlK10almIhbz4y14Jb+qFqePxKgQVx+JUCT8TpVwo8EdsnXuMi5pUGuSCv3580qQuiOF0Ofe3p LTiHPqwiNZSViC8MdQ7CmRbIVtKh2ZVsINsUEDmLBB1vACmEjjKvx90qUFTDc8a3UPH1/fyGaKyC u78ZHRtvNY1+SqVI+6VoxvhEbCV6QnGtg7qFyCagis5Ri4d+cip9eUaq5JXK97uAmu6ink+b4VRJ t0gDRqylWZXouMjqTUctT8hWold9suxGLr7+rtq6WMi2Elo3Kv5G08wtxN9o6BQhGQXUtyqvz5iy bRvw7flkgWy/iuOu83rMZPCHC4lsrFS8nPU5MBOtkK3R7HAtA3nlN7WNtkA2bekCKcqWBZLfvC+p hOi9hOi9hOi9hOi9hOi9hOi9hOjoxJpqLJGth7778ALZGuSCoxjI7je6m1nXO29b6XM3k1fhbhav X/X3alTkvU9/Jw0S6ErFSw7FCzuIl9xIV6VcIxUNHSOXETCajyoO8mYYqubHx4PrmGf599320E9r 0F5I+EEFOrFiIbvt0d2C+ahNHcQXXs+aPRV9xZkKZ+d6JmUhheYIIipBlFwBRHcA0aGul97mKvq0 0NcbvdrA51rLSsXbp07ZUzcK2h7atrPsz3WaHVAzLDfahi/tNiFYIP41KOvQUHogFx2/jBEiii6V phJB9KtWHZxswg0tuB79KiX0bU0WyPYomvrtiX6hsgVQ3ZzjsVJ5EUA7N1ohfuWqJJ1PmbP8H955 TsefyeEJ2eohnUOxoeJtVZuMGbOfSd378GfkP9dmqIe6sKxUtrgut8936rlMQT1ULeRlPSR1ZZT5 rRDbuXqAgcxfKdbO1UPS0rTxCtkm2XcDaedKJmustFS+ejmrsY4RQpQRGSqbKvJtPWSBbKpIczti EcuLT6Dmd1/BnR/KiiD1NsAJhFsFPRz0GaiopdunYc23nqHKjxXTo9No5yLlUVO1c2X26DRaUFPd O42FkQ/Oj06jBTWVynAYRj6CwbPm5kc7l0P6NYtmgSfEZwGtmGYWWCDjIWb9xWv/A7xw2oEKZW5k c3RyZWFtCmVuZG9iago1OSAwIG9iago8PC9UeXBlIC9FeHRHU3RhdGUKL0NBIDEKL2NhIDEKL0xD IDAKL0xKIDAKL0xXIDAKL01MIDQKL1NBIHRydWUKL0JNIC9Ob3JtYWwKPj4KZW5kb2JqCjYwIDAg b2JqCjw8L1R5cGUgL1hPYmplY3QKL1N1YnR5cGUgL0ltYWdlCi9XaWR0aCA2MjIKL0hlaWdodCAx ODAKL0NvbG9yU3BhY2UgL0RldmljZVJHQgovQml0c1BlckNvbXBvbmVudCA4Ci9TTWFzayA2NSAw 
IFIKL0ZpbHRlciAvRmxhdGVEZWNvZGUKL0xlbmd0aCA3MDUzCj4+IHN0cmVhbQp4nO2dfbBV1XmH j8MdcbR+RIn4HVQUKooEVIqKChKvCoIKRlEoODcWBxXrJ1UMiVYCUSMBixqLDIVi0Uhx/KKlfgQy RGllrkZbJTrWNrbjNJlJZuof/cM/+vO8ZXW5zzn7nn25L2fte5/fPMOcu/c6a+997r7vs9f+OFQq hBBCCNmtjGufdP0tdwP0OKeNGdvqvZuQL0OVAydClbvv4cc7//2/AXoc7WatrZ+EWKhy4ESocuxj 4AQmJYmEKgdOYFLwBpOSREKVAycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8w KUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiDSUki ocqBE02adN5d9502ZmxdnHbOzBLPOf/CyVdc82d//qOt732aabnuxS0TL79q0PEnHnHUMUcPOk6v //Lpl+MG7ZOnxV2pwWXTZ6mr19/+pMmVuXburXrjomUrW/77KiOYlCSSnGK1ftO2RiVOBcTj72LF 2o2ZBanEXTfvzo2v78i0fGXHR5o+ZNhwlTgx+uxxKl9vffy70EDbFfczrn2SdaWNyl+H2i29avac n/zNCy0vGqWjSZPq96I2h3x9oP0qY+74/g89VsyWOPDwI8OCbD015aVf/HNo9r0HVvRra9u7/z7a u/QW/avXaiYRhzb23tCPerCu9j/gwOWrn+1yTeRu9amlDB5yUst/X2UEk5JEklPldPhdqdaE2hL3 zdPHePxdaGUyS9x3v/00RaVm6cr1odnzW99R4dX0k4aP1ChgwsWXWgXTWm3/8Dfxn1hcn60MKhoF 5KyDGqhleNdBBx9i77qm44aW141yUcikmbGeK7bEWJrabW64Y6EmhkPEbe9/pn1P49C/2/5BaKbX mqK98af/sN2mmEnjztWVFKz3itCsETr809t1qLaHP4FeAyYliaRLk2pf3WN/F2bSzBJXbdgst4pg SQ0wVc00gA1tNBq9bPosvVf1MP4Ty1SnNc+9Nuj4EzX91nsWNVoHzdVQNJ6i+illa7rWpOWlo0T0 uElf2fGRDqLiKRrTrd+0TdPrttcOI5dtfH1HfLKis55JDTvusteNdv4lK1ZXomFprUnjZtpR87dI +5UOArX+2p+dzvP0bjApSSQ9YlIdwGfqVaMiFpCeVAP1xnhiXZMKO2iXB+1HHe2fMvL02nXQWFKl Kf4Tq63Pqlrm5cyiA5Uak4qHHv/rSqRpaIaeMqkaSFh2rrWy6+SAfo/yVNiNhwwbvu7FLeEt2uuu nXurndBQDjr4EI3+MkusNamahbOs6q1ST4XasTUrXFFtZFKhYzatcKPdrLN69aSy6wzJOedfqL23 0SEBNAKTkkSyOya1Bipxdq7Vjudri1jmatfy1c8ePeg4m6tSo7FkqDb5Jg2lz0661pYdlbhwLq6R STur93holkYNdTeq0tikCxYva3npKBE9aFId+WiP0q6i19p/JDLtQpqiXUsHbPrtSFv6MVxPtxMU 02Z06Ohr1YbNEy6+VD+G/bDWpHqtxpoY3/mjo7VK1dG33rNI1qt7TJhjUusw5zyG7dW2zraDxVdg oRkwKUkku29SVTnVnD+ec/N18+7s3FXE9K8VsfbJ0ypREVMZrFSvby5duV41UKVDMh199jibW2tS 1UwpT0VSB+1hot5VqV4D1fBkxdqNdQ/7c0yqt1QaX/esfNWkGoNoK1SoBx5+ZO29nZDDbt5xlBFf vFfYNU3ZJ0zREZTGdDKmXmu/0tyJl18VL+Wbp48JJyKsw9pkLqC//vYntvdatBNqiKq9MVZqjklt 8xsdsGm/0tFgOLWiH+32gJb/1soFJiWJpBt3HIXbjayBtBhqS90iJjGFIqahhGpm7D4d8OstdqOj mbQ2GhdkLKZ3qU+bKxerIkmvcZsck9pZNZXTultddwW0rC5v+oUMhUyqX3GjR2CsQfz5a5fTbyTT j47HJLvOXQdamV+93dtj92Bbh9KuXhgypt20pgPCTLdy9ILFy9TATrxUqje2NXN2N9+kdiE1Psth Y9hm7viFzIdMSMvTpUlVKxo9AmMN4iN5K2LxvUCdu4qYJppnM4NBVapK9fbFzl0mVVENJU7Y3T76 NyNTHcZrVKL6M3jISbYtOsgPJ9N2x6SqmfEKnHnetyRrdR4u1EIz9ODZ3cpXT8aG51Zq88qOjxoN OcOa1L1Oqt1JLq7ssm1d1r24RTtDpXrKJV6Tuo3zz+5aP9qrw5+V3QsXn3uBLsGkJJHs/tnduIf8 Imbt60aFpbPxdVITtEpTozWRjq+bd6d8p5GF3eKbY1I7w9zk2V1D9VCda0Dd8tJRIlxNKtRtLdve /8za33rPotq5dutvozuOlq5cX9k1LNVose7+9tbHv9M+FkbEOSbVAV6jO47s6FH9ZI5RdbSmt8TP 3UA+mJQkEg+TNipi1n7i5VfVzrVhbCOTSo6abiJTJ1pK7Xc1dH51FJBjUjm3UvCOo87qhTbNoso1 j59JTxl5+r777ReeijLWb9pmz2/a9YLMiRG9fc1zr8XXSWtNamdc7XK/XSGte0o/PrfcyKQa2FYa DzDtk4mv8xq25vnPO0PtJ0lIy9OzJs0vYtJfJTozZqgeqh/7drVGJtV7NX3Q8Sd27rrL0cpdBlNk vknV1SFfH7h3/30a3T5UaWBSu5OT5xSax8+kdo4ividcc8MzUHYRQb/EcPleL3QgpOGedVLXpPrN Dhk2POw/duJCu1ymmdk2/+yutkX7mBbX6HKA3hU/Hx3QX4G2Qu9t9OwYZMCkJJH0rEnNlTlFTKVJ NaS2KtrjfnVNqh7seQE77SYV2jMRmXszNOYdePiRqkI5Z3e1XK1b/kZV6pnU6qcqbctLR4nwM6l+ xXZxfNqMDnUupepXr30sdGIPOmnHW7B42fceWGEXQMNYzzrUoVE4rSoF28Oq8c1y13TcUKl+4ZWG ltr3tBPaJXv7LgVrYyYN/WiJ9qVY6q12yGnYcDVzPBmwB3YavRcyYFKSSHrWpJ1dFTG9RUVGvrMz wKpO9qWjpj8zaXyPkz28UKmODsK3gsuh9tV/Upt6UOf2YHv8pYL2JxbfERoeYm2fPC3nmL9SvVM3 vnpld3XK3dxxVIgmTSpJ6Tee89lag8x5dR1QyXR2P61+79pPMi7W7mf38FSq1wXib2awDmNkVe0V 8TdSGjKadoDwcLRa6r3xt9NruXE/2pPPPO9b1827M/NdTLWb0+hOJHlWcxvdDgcZMClJJDlVTsUt /1vErUHt8XNOEevcdQOkqVDO0sF5KE3qKlPipEJZMvOES2f1DJ6GD+FLcVXrdDAff8uNVjvTlUqi hjBdfhd95l3aEB0MqPrl1EaoC/+rGniDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8w 
KUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiDSUki ocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUO nMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQm BW8wKUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiD SUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiBScEbTEoS CVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy 4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycw KXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiBScEb TEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKS SKhy4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKV AycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiB ScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOSREKVAycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAre YFKSSKhy4AQmBW8wKUkkVDlwApOCN5iUJBKqHDiBScEbTEoSCVUOnMCk4A0mJYmEKgdOYFLwBpOS REKVAycwKXiDSUkiocqBE5gUvMGkJJFQ5cAJTAreYFKSSKhy4AQmBW8wKUkkVDlwApOCN5iUJBKq HDiRoElXrN2olYFkWbVhc/f2MUJam/uSqXLQy0jNpEtWrG7pnxrpOv3a2grJFJOSRJJIlYPeR2om tfWZcNH4m+6YCwlyxlmnF91bMClJJIlUOeh9pGnSJY/cv/O/3oUEkUyL7i2YlCSSRKoc9D4wKRQC k5LyJpEqB70PTAqFwKSkvEmkykHvA5NCITApKW8SqXLQ+8CkUAhMSsqbRKoc9D4wKRQCk5LypntV rg8+4V70mXHApFAITErKm25Uub75hHtbwWfGAZNCITApKW+6UeVs751y4XkLb5vTRzh3zKjufVZ9 GUwKhcCkpLzptklX/fjeL/5jRx9BMu3eZ9WXwaRQCExKyhtMikmdwKRQCExKyhtMikmdwKRQCExK yhtMikmdwKRQCExKyhtMikmdwKRQCExKyhtMikmdwKRQCExKyhtMikmdwKRQCExKyhtMikmdwKRQ CExKyhtMikmdwKRQCExKyhtMikmdwKRQCExKyhtMikmdwKRQCExKyhtMikmdwKRQCExKyhtMikmd wKRQCExKyhtMikmdwKRQCExKyhtMikmdKK9J/+nDX2x5+5WWm6Uur+34+85/3d5kY7X85a/favk6 NwkmJeWNq0k/evOF3//q503aSi0///iNlksTk/YU5TVp+yUXTLzsovw2ajBq9Mg97JpN257XJry4 dWOXLTdvf+nkEcPs83/o0SVddquDhx5Zw6de+CtMSvpg/Ez6zqvPqNl7WzZ06Sm1OXXYEFufZ554 oMtuf/v+z3rEj1s2PolJ/SipSf/lP98+4MD9u2x2xlmnH3n0EXvYpAuX3D3g0AHNtJw8bZI29ooZ U6UnDWNzWi594sF+bf3y2zTJjI7pu/OZYFJS3viZ9Ad33zTo6COa8dTUiRPUYcfVl0pYGsbmtHzq scVtbf3y2zTJnJnTmlw9TNo9SmpSjarUrMuzuy0x6djxZ19+1ZRmWmr1DjtiYPP+6hGT7uZngklJ eZPZbxctW3nDHQuf3/pOl3tvlyY9f+xo2aoZTw0bevxRhw9s3mg9YtJzx4zCpHvAXOl8bk2adO6t c04YOjj8+OYHW3+ybsVDjy7ZsHl9XWvIuctXPaw2tZcvNUVe1hI1N1azhr12uXPbe69rSBhbTM0e XbNcHdaq7Ze/fqt///5qv7N6Jdca6IU6V/vQv5pp1ohRw2VSvdAi4g3R2+MN0dxZc2bqY9F6xmuo TtZuXFW71cbm7S9piVpPvQgT44VqWZiU9Klk9tuwZw4ZNryRUpsx6e9/9fN9+u9tp2p/+/7PzH2f vfvq3676kSZ+8tbL1uzzj9/QLGlU6MWnnZttulo+v2aZRqA7Nj8V+tTcm6+7WovesvHJ0IMta/PT j6155P7tL6+tXZMPtj2nfrTc2L96PXrkKbbQZs4VY9Jumyudz61Jk0oHfzKvw17P//7tklf4Y/nG scds2vZ8bFK17NfWz+YecOD+kkvo50/vumnf/faN/9aumDE1SEc/6r3WQP/KqtLr9FnfDr0pk6dN iu28esNKTbQLmiYd6SwsQm/UMYBmyYDxQrWe1j7uOWyI5oaJYTipj0jbEqYfd8KxYau1PuMuODfu Xz/aSsYTmxw4Y1LSa9LIpCG1Sm3GpPJgW1s/u93INCTTya3Wp2bdN3+uZr367BPxsjRO1MQFN3+n LfqrHzp4kF1s1dwwMQwnn3jouwcd8P9/9acOGxKuzGrp7ePOjPufcuF5tkrxRK0eJvUzVzqfWzMm 1WBKbeQsvX5607pKVQoauElzGtBJWyePGBZMWqkq6cmnH5cZTWpfO/hraqm5aqy5Ey4aLwdpikZ2 o0aP1BQ1DiaV2tovuWDR0nvla02UZytVvWpx0qUmqoFkGtatY+5sWT6Wjnynt6s3jSjlu0r1ZqTa Mamdr5am9VpzbVVtQ2rHpOZrre1zr/1Ua64fBxw6QMQG12coe2qK3K0fNbEHx6SXT5+lHaZJxrVf 0mShI8Q1mf12XPukeO5ee+0VXgelNmPSG6690rQYNCTfyXoaA2pEOXjQ0ZXqzUi1Y1Jz65yZ0zQs 1VwNYOVf66p2TKqRpn4864wR77z6zP/82z9qZHrYoYeoKxtmzr9xtq2n7KkpErR+NIMzJt1j5krn c2vGpA89ukSWsSdH1FLt5cQwV9rSSDM2aXzyc0bHdE2x4ZuanTB0cHxDrBnq9u/eEkwan0PevP0l TcncMCz3aWI4gypXmrOCdEzBhvyoKVrDeMhsrxcu+XLD47tq4w3JXCeVYXU80GjNx44/W/q2owVD 6rcTzjt76DopIX0kp44aXenKpHKlOSto6NEf3h3mrqnWqNCDBphhjPmDu2/SrPikrt6oiXFX4Tyt RqBSZ/ygjWSqBtZe/h1w8EEybJh72/Uzw73BXCfdM+ZK53NrxqQagY674Fx7bWNSaUWyWLtxVebB 
TFlDTqkVgVpm+tTwTcK1oV8Yvum1zBva3PvgQk3Rv5oVkO9s4s7q9VO91irFy9KwMfRgJ3WDamOp 2YZoXGkbEnswY1IJVK+ly3g1RP/+/TXRjhDU4A9PHiqJ1z6M0yMm1cG8flNNctqYsXuq7BGSl8x+ m7Nn7t2//+Qrrlm/aVuXY9IPtj2nBuGqpWlIw8bQwAaedU2q8aZmyY8aQqpZ7MGMSTVo1euJE8bq x4AtWhPtLHGler73gYW31D6Mg0n3jLnS+dyaMal0oxFcPHaTSW0rNFadPG1SkFetNTIm1fBWbeK3 V75q0mC9/OGYNdOaxIPB2htuc0xqps5sSPBg3JWtWN1849hjdlbvldJIOVw7PuyIgR1zZ4dzudy7 S/pscq6T9uv3fxcrDzr4kOvm3fnKjo/iNjkm1ShSKqyrvy5NKpYvmq+xpC36D/bb95qpFwcLx13J 1I02avTIU76oXiftuPrScMlVi9CYNJzLxaR7xlzpfG5dmlQjx0p0NtWQOyQpjeY0EKtUNWTSyTdp GLvpxfJVD6vP2HSNTLrg/vlqlsEWl/myiKImrbshdmG01qRaVu1qhOHwzuooe+kTD0qpMqnax9eO MSnpm8m/42jQ8ScuWLxs+4e/qW2TY1INCaW/bptU2EXP+TfOtm9skE/ttt64K/vmhyuntKu3DPHJ YanzqccWz5k57ajDv/yrN8li0j1mrnQ+ty5Nevt3b7GRVxCr1BY3sAuO1kO+SeUpzY3Po9o9SI1M alc5M4vT+FfLsvudMl8WUcikkmA80LYtDRsSd2UP2mS+u0lL19vtMuuja5bH9ydr1pnn/FFl1+O3 mJT02TQy6eizxy1f/WzO3tvIpJ9//IbEt+aR+7tn0m0vrI6vqH4R3fqb6UpDTo03zx87OrN0DWnV iV4/88QDz69ZFtv5rDNG6O0mZUy6Z8yVzufWpUklAo2zwo92Y2p8x5FMV6k+e9KlSeWj4044NsyS oewOpUYm1ShPrlSH4fFPScpu931x68baL4soZFLbELttON4Q2zQbPodbpzTy1Y/hJqLQ2O5QGjFq uNYzrKSQSfu19bM7lOxsNiYlfTCZ/XbeXffZxdAu995GJjVLfvbuq90z6W3Xf3ljxsvr/iI0vm/+ l39f5sTMJdcpF54XZsWN7WYnux8pvjVX2t2n/952h5JMGp+CxqRO5krnc8s3qVwmI8TefPODrQMO HSAn2nfu6V81OGHoYLv1KN+k9mV94y44d9HSezUA1LvsW3BnzZlZ16Q23FP/WmLH3Nlyn52DtSdb M18WUdSkEp+61TBZxwlhQ9S/DZltOKzGEy4ab401MFcDKVWNTaxauj00Kh1rlj1Iq7ljx58dVnLn rgd57Jw2JiV9Kj3+bYHzb5w96tSTajXUpEk/eetlCU6j2huuvVJvtAudGkvarUd2068aT504wRof dfhANbhySrsa699K9aEY+yZ8e0ZGjbVKmiuN6scFN3/HFnTN1Isr1fuRwo3BmLRnKZdJV29YKYNk vqdIqpJ95BG5Q4KQ0cLjIfKjlJdR4eVXTbGnYNSP/GJv1DhOwzppS+1NOnK0WsanSY2nN62TgrUa epfGeuGb59svuSDjJltW/Nimlhv3mVk92xANk21D5KywpVoxrZXMK8yt6lbvVTNrrEXHH4uUrfXR LKG3xJ+nLCzzaqIdMGBS0nfS4yadOGFsxk0y2qxvXxKPUt/bskFTwhfIaxwqwlw5VwIdOniQJCjT aYAZnnORT9VSw8nwsKq61ZRhQ4+vbfxF9aEYOddMrbfE6/xp52aZVxPlWUzqQblMCi0Hk5Lyhv+f tBkwaTfApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAI TErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3A pFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yKSZ3ApFAITErKG0yK SZ3ApFAITErKG0yKSZ3ApFAITErKm26btOPqSyXTPsKUC8/r3mfVl8GkUAhMSsqbblS5BYuXtXqt W5MVaze2XAclApNCITApKW+6V+UkU+3DfYolK1a33AXlApNCITApKW8SqXLQ+8CkUAhMSsqbRKoc 9D4wKRQCk5LyJpEqB70PTAqFwKSkvEmkykHvA5NCITApKW8SqXLQ+0jTpFfMmCqZQoJMuGh80b0F k5JEkkiVg95Haibts09vlSuFnjXDpCSRJFLloPeRmkk7++TTW+Wi6LNm12NSkkbSqXLQy7g+PZNC LwOTkkRClQMnMCl4g0lJIqHKgROYFLzBpCSRUOXACUwK3mBSkkiocuAEJgVvMClJJFQ5cCJUucum z9JuBtDjjGuf1Nr6SYiFKgdOUOUIIYSQ3c//AsJlc2kKZW5kc3RyZWFtCmVuZG9iago2NSAwIG9i ago8PC9UeXBlIC9YT2JqZWN0Ci9TdWJ0eXBlIC9JbWFnZQovV2lkdGggNjIyCi9IZWlnaHQgMTgw Ci9Db2xvclNwYWNlIC9EZXZpY2VHcmF5Ci9CaXRzUGVyQ29tcG9uZW50IDgKL0ZpbHRlciAvRmxh dGVEZWNvZGUKL0xlbmd0aCA5NzMKPj4gc3RyZWFtCnic7d0vbBtnHMfhFxQUFBgUGBQEBAQEDBoU mAUMBBQEDAQGFBQMFAQcGBgsKAgoMBgICDAoKDQIKCgoKAgYKAgoDAgYMHh3didtrq5qpPO+J0XP I/l0f6yz9dNH9lm25FrvjeXTcg8dDT3WLVoO/QS26XjoMv4PzdBTpZvayFEbOWojR23kqI0ctZGj NnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23k qI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEb OWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLU Ro7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0c tZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWojR23kqI0ctZGjNnLURo7ayFEbOWoj R23kqI0ctZETrO3h6ST0SM3QU6VbsLbjOg09UjP0VOmmNnJ61dZMJ+eLP3bLz28Xrx632w+ev12c H7YrO81P6+OHq/vszxbzo1IO53XWbKOlOzyvoadKt1611cvl4t3yy+lf8w/1QymPLuv72ad6Vsr0 63nrrL29u/14cVN/LS+v6sfFdmr6kWboqdKtX231oJSX9WavlIu6V87qSfv6dtae87+11V9KGd98 
9k5Kz9raF7S2rNft8qROH9yuNsvo9nKjtqvV2ryqjZ61nZdVbaursTalnfpqvff97UZt63fPmdro W1tb07+1TerXDwGLqjY6bbG2J/XNeu+n639qG6mNDVusrVxfP2zXdutFu+9Fu3bwbW0H/UO6k2bo qdJtm7Wd1Pnjsnu1fFrGyz93y+RquVHbQX29v42WfqwZeqp022Zt5bdlva43z9rN09W5f/+8Udv4 S63j/indQTPwUPmOXrVN99rFaLrTLsfTUbt8cnz67NH60P7zF/tl0h6frr9V2Juu7nN0NOpV0V01 Q0+Vbn5xRI7ayFEbOW+O76H50FMFhjZv7qHF0FOlm+s2ctRGjtrIURs5aiNHbeSojRy1kaM2ctRG jtrIURs5aiNHbeSojRy1kaM2ctRGjtrIURs5aiNHbeSojRy1kaM2ctRGjtrIURs5aiNHbeSojRy1 kaM2ctRGjtrIURs5aiNHbeSojZzUH9RGnQw91W36G7nTxdUKZW5kc3RyZWFtCmVuZG9iago2MSAw IG9iago8PC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMAovQmFzZUZvbnQgL1RyZWJ1Y2hldE1T LUJvbGQKL0VuY29kaW5nIC9JZGVudGl0eS1ICi9EZXNjZW5kYW50Rm9udHMgWzY2IDAgUl0KL1Rv VW5pY29kZSA2NyAwIFIKPj4KZW5kb2JqCjYyIDAgb2JqCjw8L1R5cGUgL0ZvbnQKL1N1YnR5cGUg L1R5cGUwCi9CYXNlRm9udCAvQXJpYWxNVAovRW5jb2RpbmcgL0lkZW50aXR5LUgKL0Rlc2NlbmRh bnRGb250cyBbNjggMCBSXQovVG9Vbmljb2RlIDY5IDAgUgo+PgplbmRvYmoKNjMgMCBvYmoKPDwv VHlwZSAvRm9udAovU3VidHlwZSAvVHlwZTAKL0Jhc2VGb250IC9BcmlhbC1Cb2xkTVQKL0VuY29k aW5nIC9JZGVudGl0eS1ICi9EZXNjZW5kYW50Rm9udHMgWzcwIDAgUl0KL1RvVW5pY29kZSA3MSAw IFIKPj4KZW5kb2JqCjY2IDAgb2JqCjw8L1R5cGUgL0ZvbnQKL0ZvbnREZXNjcmlwdG9yIDcyIDAg UgovQmFzZUZvbnQgL1RyZWJ1Y2hldE1TLUJvbGQKL1N1YnR5cGUgL0NJREZvbnRUeXBlMgovQ0lE VG9HSURNYXAgL0lkZW50aXR5Ci9DSURTeXN0ZW1JbmZvIDw8L1JlZ2lzdHJ5IChBZG9iZSkKL09y ZGVyaW5nIChJZGVudGl0eSkKL1N1cHBsZW1lbnQgMAo+PgovVyBbMCBbNTAwIDAgMCAzMDEuMjY5 NV0gMTcgWzM2Ny4xODc1XSAxOCAzNiA1ODUuOTM3NSAzNyBbNTk1LjIxNDggNjExLjgxNjQgMCA1 NjguODQ3NyA1ODMuNDk2MSAwIDAgMjc4LjMyMDMgMCAwIDAgNzQ1LjExNzIgNjY3LjQ4MDUgNzAz LjEyNSA1ODYuOTE0MSAwIDYxMC44Mzk4IDUxMS4yMzA1IDAgNjc3LjczNDQgMCA4ODMuNzg5MV0g NjggWzUzMi43MTQ4IDAgNTExLjcxODggNTgwLjU2NjQgNTc0LjcwNyAzNjkuNjI4OSA1MDEuOTUz MSA1OTIuNzczNCAyOTguMzM5OCAwIDU0Ny44NTE2IDI5NC45MjE5IDg1OS4zNzUgNTkwLjMzMiA1 NjUuOTE4IDU4Mi41MTk1IDAgNDI3LjI0NjEgNDMwLjY2NDEgMzk2LjQ4NDQgNTkwLjgyMDMgNTI3 LjM0MzggNzgzLjY5MTQgMCA1MzMuNjkxNF1dCj4+CmVuZG9iago3MiAwIG9iago8PC9UeXBlIC9G b250RGVzY3JpcHRvcgovRm9udEZpbGUyIDczIDAgUgovRm9udE5hbWUgL1RyZWJ1Y2hldE1TLUJv bGQKL0ZsYWdzIDQKL0FzY2VudCA5MzguOTY0OAovRGVzY2VudCAtMjIyLjE2OAovU3RlbVYgMTI2 Ljk1MzEKL0NhcEhlaWdodCAwCi9JdGFsaWNBbmdsZSAwCi9Gb250QkJveCBbLTEwMC41ODU5IC0y NjkuNTMxMyAxMTI5Ljg4MjggOTgwLjQ2ODhdCj4+CmVuZG9iago3MyAwIG9iago8PC9MZW5ndGgx IDE5MjE2Ci9GaWx0ZXIgL0ZsYXRlRGVjb2RlCi9MZW5ndGggMTE5MTkKPj4gc3RyZWFtCnic7Xt7 XJVV1vBa+7meC+cCBw6IyYEjgoqiIB400qNcDa94AdQToCKgqFxE0DQta/BS2VWdarSmaTQzQ6tR m5nuac28WM2U2kXTKbOLTDVvTUZy+Nbe54HMqd7v9/33/X7veVhnr2fvtfdee+112xsABAALrAMJ htU1VNa9v2DiboDIPwHga0srllT+a/DsdQBjUgCsZ5dUtNRJl9RD1BZNvTy1y+ZXrFnYvB4g+iMA +abqJctbZj7dfozaMwFMUF1dWWF/0aQT7dcE/el1wV8/dHCcxoeRVbUrF/pOPknjXVVNw724sK5q ycP3PVoLYH8DQPls/orlnur+I18EcG+gMWfPX1JRp6a9WwdgpvGVrcB5J7DNv/rXZfasbyCWTwXw +Lprn+LlH769fkt3WtdJk1k7SXRWQQ+hPtr64CjicWJ3Wvd4k7m3xfgwP6+h72mgQSEowMABqTCT uuYrn5GsUAaSGX0kH9HVEHwJ77IvIE9ug3EE9ysrIU75PdThcWihtoc5SE3gpbbfEX4zwRKCfVIc OKiuUG5DO5WbCbYQZBM8QnAzwW0ETQTLOT3vS/jzBE9SH5N0I1SpgyBPcYIivwRb5XMwVelP5W0E j8FWpYzeW2ArGw33y9nglL+k+lqq/ze4lGIqH4Sp8gEq91L5IbW5qTxHbUQjfweKFk7jHiP4hOB6 mErr3cpOdl/kJfHwJ7mt+yLx0UBjj5N3QaMcTeUjMFP+NYxj39B6ZUiTn4VWNp/ev+j+Wp5H+Fw4 oJ6BVvkwQSvRP0r99sJMaTG0Sg0wTkqB4fIz1JYJipoLYfJimjuXYDS1tcA0mu+7UAlP8XWLtdO6 +Zp71iT45zz9FHAef/1jIP7yBI/zu3cRVPbydiVwvi4DaS/QvuF64uWMnAYLpYfAR+UHBCvkMqhR noJCAUmwkujuJDoO+7TrgdG+ZBHOdeZ3vXBH9ztyOcwmfLacDqOVaEiR3yD9SOv+TFoG9xDUKLNh iHqe5Er88P3nuhfq3/2tFNudRvhj8r7u76mcz3VKOQMLuI72yIrLR9cgS7sOvNR3LLWPJbpjXKfo /RjBk2K8dhoj19j7Nsgh8FPbbQRPy53gJd68fP2GXj5Ja75Rn0B6z/eD74VRcuD6x0Z3ryVYRrhd 7BHXPbKjnv3qAb4PNNdDBF8RfMr1nGAvQQXBbqp7jcoXBU0tRJI+DeZ6y3WH6yjXE2q/SHbQR+wZ 
538FtAg9o3VJD8Apgnraq7vVm2EmwVyCsVzuXLZCdrlizb8Wdkp6S+WHZE8DuD1xnaY1bOidk+te 9GUl13+yC+pzjqBdyIH0rrck+1ROi7KPkA/XwZ6Sr5/sg5eSYpQ3wAGu11wne0suO742slluN0a5 7DKeXMKOqJQDMFzYA+lrT9krp155QQ7xSnqMDxJ+o5IIMXxflfXd29Vl3dulgVQOJkgnmN29nRV1 5ygOkIXPmWv4GxqL/N5W5VmYykbBA9zXKDrowtfMItwKC8VeTyVa8jnybrha8HceZHUuTJRfp3I9 XM9GdS/idfILUCA7YAnxu4UVwX3yg+QHeR33iRshSb471C49QyXRyDtgrqDbTvhKiJFzwCy9R213 i/YbpbepPAoF0ivU90FoVWV6bwKHmgrr1RyYTGtayesFPW+nOvkB2KqaSY4LQ75VjoUZfG3E45/k 62CwboV4bTvRJJKM44jmItgUF601BnT5NXo/SkB9lbchjo+lNcNcmjNd2UF1LxBQH4pk8eoo6r/S 8F1F1OcPhi9PJxnRmGoRzKB3h1JPNA64XqNS20PzZkCGymnfIfgSMrSNYsyJch7Z6x7qM5d81AHa dyC9fhDy1KthsnSK7OV1WieH6wj+CkOUl8hvkZ+Wt5A/2kvlPoJ4yJPOwXh5BsmoHIpof13ym1S/ EZbQfrWqd9O4GRR7LlJdEcFwyFGvErrF5Zcj/7X7Ep+b6vOkNeRfq0kvqrufYbu7/6peJB92iM/d fUnwQHPI/J1kRT6ltcdW/sO2rrQXozTscmCPffaWFPv4upQHSB9oLvI/z5B/f5PKvVQepfJNKs+T PTdzv6POpDjZKeKEk+z+HREvKWYJnQrZ5q96bbCZfGrTj+3xCru81rDHyB57vNIOryx5vBUx75NQ zGMfwn09suAxg/t9ETdojh7aK8ueviG/3n0Rt8JQFoSh8hmKE2ndndJYsvPhsNA0ETar1wL5VNiv PUPzV8I1hi9MJbiaYI38NtzFjsN6DoTvp7KO+3qeJBlZk4vnQ5QZ9SFQ4YckSxI0P/5QoyQrqqab zBZrmM3ucIZHuCKj3NExfWL7XtUvzhOf4O2fOCApeeCgwSlDhqYOG56WPiJjpC9z1Oirs64ZM9Y/ bnx2Tm5efsGEawsnTpo8Zeq0oukzZs4qLimdPWdu4Lqy8gqYN39B5cKq6ppFi2uXLF1WV9/QuLxp RXPLylXXr15zw9p1N960/uZbftW6YeOmzbfedvuWO+686+577t26bfuv77v/gd/s2PngQ799+HeP /H7X7kf3PLb38X1PtO0/8ORTT//h4KHDz/zxT39+9rnnX3jxpZdfOXL01df+8tf/aj/2+htvwt/f evv4iZPvvPve+6dOf3DmLPmt1bTSVeCnvHE03IBr8S7sZkfZa+yUdIO0SbpVekg6JlvlKfJcuUy+ s98t/f7b4/BEevp5EjwDPMM86Z7RnizPGE+OZ63nd55dnr3xSnxEfFR8QvyA+KHx1yWwBDXBnhCe EJnQJ6FfwuCEgoTyhMrEhxIfS/xL4uuJH3/JvtS+Z93dPEOHB2nuTnaE5j5Jc99Mc98uPSyjbJOn ydfJd/Rb1+9fNHeEx+3xiLnTPKN65374P+aeY8ztpLljeudeQHPvMeaGL5Xvsbu7+0OAbhfEAwSd AF0Huw507efb37Wyq6WruWtFV96HD32YccZQijO3nNl69rszt5z96uyXZ1rOtgGc3XTGeXbIWc/Z fh8s+uCNs9qZ905vPf3V6c0Ap3cRLD9df7rsdOrpYe//+v073/3i/aXvl/BxlOeE8rWIDD4tBAIf SVZ7xYf9lv2ePfUftbuvrOltuSsEP9Gyjt0skL3wMNwMt+AI2AqfwK/gdtgMv4FH4XfQCZtQh/Vw N3wJX8Ft8A1sQDOchi9gB+yB/4Z/0fnot/A4vAZHYR/Mg/lwByyAv0IlvAp/gdfhv6AdjsGnsBD+ Dm/Am/AEVMF3cCcch7fgbaiGz+ECbIRFUAOLYQnUwlJ4EJZBPdRBAzRCEyyHFdAMn5FkVsFKuB7W wGo4BA/BWriBTjE3wkXogGfwJL6DFrRiGNpQQhnfxffwfTyFCqpoRwc6UcPT+AGewbP4D/wQwzEC XRiJH+E5CEI3fozn8RP8FD/Dz/ECduA/8Qv8Er/Cf+F/49f4DW3NCfw3fosX8TvsxO/xEkahG7sw iNEYg30wFv4BH2JfvIqsBbAfxjFkjElMZgpTmYYejAfyVJiAXqYzE/bHRByASczMLIjI4CM4h8k4 EAfhYGZlYczG7MzBnCycRWAKDsGhmMpcLJJFMTeLZjHwRxyGwzEN0+FjOM/6wEk4C+/B+3AKzsA7 8AGOwbHox3E4HrNhN+ZgLuZhPhbgBLwWC3EiTsLJOAWn4jQswuk4A2fiLNiJxViCpTgb5+BcDOB1 WIblWIHzcD4uwEpciFVYDduwBhfhYrgfa3EJLsVlWIf12ICNuBybcAU2YwuuhAOUjQ9kg+APcBBe ZmPhKXgaXoGb4EVohS42DH4N/4SX4BH4GzwPL6AJvodLAP6MGYXXTijIz8vNyR4/zj92zDVZV48e lekbmZE6dEhK8oDE/t6EuGiX02EPs5hNuqYqssQQUnK9eeWetgHlbfIAb0HBEP7uraCKissqyts8 VJX3Y5o2T7kg8/yY0k+UC6+g9Ico/b2U6PBkQdaQFE+u19PWnuP1HMLZ00oIvy3HW+pp6xD4JIHL A8RLGL3Ex1MPT250dY6nDcs9uW15K6o35Zbn0Hj7LeZsb3aleUgK7DdbCLUQ1pbsrduPyWNQICw5 d/R+BnoYn7ZNSsytWNA2dVpJbk5sfHypqINsMVabmt2mibE8NZxn2OzZn/L8plsPOWBe+WDrAu+C irklbVIFddok5W7a1NrmHNw20JvTNnDVR9G05Mq2FG9ObttgLw1WWNQ7AbYpiQ6vZ9M3QMx7Oy78 uKbCqFETHd8AR/kSe8VE7T04EG/EIa0vPp7zsvmQH+bRS9u6aSWhdw/Miz0A/tTBpW2snLc839MS OZO3rOtp6e1e7o3nW5VbbvysqI5uWzfPMySFpC9+EumH2j1t0oDyefOreVlRucmbkxOS24ySNn8O If4KY625+4elEn1FOS2ihothWklbqreuzeUdHyKgCg/fg5rpJaKL0a3Nld0G5fONXm2puTmcL0/u pvKcEIN8LO+0ksOQ3n1m/whP7JPpMAJKOR9tUdm0KQNyN5UsWNgWVx67gPRzoackNr7NX0riK/WW VJbyXfI62gaeoenixYyiF63tCuoeYr5yLVH3lLBYqZTvFlV48ujLOz6LGhy0XeKV7+j4LE8JubIe MprFoODYj8ahFykxu4A3SbxrdkFsfGl86PMLLMUaPCmJbfplYzmoopen0Dw/y1qImjM00JNbmXMZ gz8aVDEYNEb7aT4Zl4UxMfXQ+XYW9DRJiWS5VMdoGFHFdzHa0wZTPSXeSm+pl3TIP7WEr43LWuxv 
4XRv4bTZJWK3DZvcpHsLp2/itd7MUBV4Nk1oA1InPxlOZviIUG0eeZ5Nm/K8nrxN5ZsqDnWvm+f1 OLyb9hcWbqrLLefTlpAID3U/szm2Le/W0jZHeTWO5uN7JyzY5J1ekhUrlIFlzygxuMg0tFIMT25j /H4vbpi2348bps8uOeygFGvDjJIDFKiyy8eX7u9PbSWHPeSFRS3rreVvHv4GhUi+4ADFL94Ue9gP sE60yqJCvM8/hCDq9J46hPmHWKjOIeroM4Rm6Z4xZQ+m7hm75/U90pyC4XGzCaYTFBFMI5hAMJWg Jmd4XAlBMcEsgskEUwgmEeQR5BLYN+DaaNwS2Blgjmj8AHBt05amnU1PND3X9HrTB02apwHXNmBZ ETp2+HfU7bhjx4M7nt+heu4edve6uyV/Hd5xA9atWbfmwTVta86sUZatRfvquNWe1VtWy/br467f cr3kX4FT2BRpijxFkctb6lraWiR7TlxOas6WnJ05T+SokJxMSVS4U/f77GOPRaHXlptgzY0353r0 3Dg1t5+cexXL7Qu5ffRoPUp36eG6w+/UbbpVN+u6ruqyznTQCw9p3UWFbfrUOSX7EW8vbQsvhMIZ 4w8DYvcttw3+yc94vKqwLXZ6SdvWq0oL29IIgav2R8F442XYVaWDkVzUeCycWrJfp/rsuaEyylE3 Zr/Pl1vjCTm/8tKc/cOg7sk0GAYxddF1jT/6LA8VV87euHwwz10pX1cOQwyBS06lMxV0nye4wCFI 2atygnLpUXR6olOW8nfKOqmT8iqYfy5f/X/9UN5nQxvlm+fpCX06qU6mmtDzAw0YNLsp330CDlOu eoIyqWcvw3n9S5SvEs4m4JNsI5owCVopd7kTT9E4n1FelkI54r04g8b5M+VCKfAqO4MB6Sb5OcqY LPR+J+VQX7Bh8knKdB7B9+j7Qeam+qfYG6xK+iN8x2rYeVjH1lFWtAcacSRlvD2fL4mPLwWW/D8+ +eJZDH+hvG8jnmSDfuZ5QrqOnvukk+L5OvTI7yo+ZbvytfK1Wqz+U/2nNpqeZ//z0QP0nDPlm/5p 9pu/sqRb6uh5wWqyNlifDIsNW07PC7Za23O2oL3MfsF+wVHxv8//Pv/7/P/+8GsCWIfXSevk60AC DQb4o5Sd0oPyTg3c4KMq0HYivQKkdnV1YGpZgIrhwyKc8c7EeGf8Ogm61jE6ZdMQ0EXejt+k7es+ xqLQBxYIP6iuB6u03mSBsR3oOE49fSNGpqdFRbpUb8KAfTWFhTUc1oti0SLqjXaKJdcozxEvCX6H 0qQxmKWhJjEVrKoMY8c6TrWHj8LUriMOGi3eqWT096U72ajgcYzbVS17nuzc3N45jrjYDCDPoYjk ghb/dK+G0vYwzA/HzPAN4SxTL9BL9Gq9Rd+gqygXZ9oL7CV2ye5zFbuaXZILdFM4F0ASF0BkCVbT uXcDRReMmO3S1TlWl9WqcFY4E+llAVpZWaD9+CuOU4G3Ah1lgUBg+LAAlAXSM3BkxghvgpY0MrRo DeMj78c3Nh/dHnhixAetZ4MPB99iAyayx9pv/f1Dm2o2t/9h13dB33nifkv3BTmCuPfAZn/pLe57 3SyFZbGJTPI60h3ZDklyhiOYwvqEFb8afTL6k+iL0XK0UlzVt7nvtr67+sp9E1qdqPOLEezTxzyn 2NRqYmAy2SUEaU6cM9w+JzIuLFKsoeNUOwdn+Cja3I5RManEPkY7jgc6aC3tjreOtx9vHz6MVhPA gM8XlZ6WMWKA15vBy8tXRiB7E7bctHzDxTtWDU0682btwlWpry5475tDnwY///r09a9d/+miW47/ 8eWFT46e9epTx+vqgm9+TLqXTbtUS7tthjAo9Wf4rMVWBklWZFbZpjZZZunSLPCZ883FZgnMdt2C qCsIFk0Km63PJuVEq8aVgpjv6MhytJcFjN041ZHV0eE4LiqGD0On1xmfgenO9EjEeOlkZ1el5G24 J9gQ3EA7uzZ4I669V3rs0u3H2K+CBST9R0j6ScRVNEzxD0EIiwT7rCor6la0KsXbIo9Gssg+JFqi dJv1OeHhkm12mDTHHRbGeelwdDhHcf081S6YCXS9FQh0HHG87CA5Cgk6SYIkOmd8ZLxTiM8Rn/BI 1c6GSw2/WtlYJUV2nRs/9+SnXd+eXHZ8NY68d2NzNjt+KtjZr/Nc15sks5uJuwnKcXBAH8jzD8ao 4iQTmmR0FvtsqNnctiSbz1Zsa7WpYIuNmaOqEQgRc6xhoAju2om7DiGk+i7nKNrkl0lAblXViJXw SBd4PU4HSI70tJF8pxPYxU33fj20euLr3we7XyZD37zmw8DSxjlnlON3TQ6uTg12v3cy+A1bi1aW eCkz+M0tzau2kNVspX1tVI7QrkZBPDzgr2rqe29fJm1XsCCM9tCLcAuZpCkiN1Mr0Eq0am2DpmhX 5VpsOei2tFqYJTpHQzcmoQ/zsRh34QnUMYE02DnBXeXBYk+r57xHAo8nksnRfSagLXKCjLrMhEan 0Y8z3ckVuj69I410Wih018sBB20OV+uQWpBOk17Hxzv7p6fJpMOy00VSjAqptsQRQwSzgn5Mbv4I 8RA6g38LzswaHzx/TcmU4ft+c9cfBiiHbC89va+yLHji1c/lT4IPyc892vVmsK+2YuOaBtKQqd3n 5UeVv0EkWv1v5AM2IbJiS5Wl2SJhAeHJ2ONftuNumuBVPIlm6Sdr4drw+8L/Ei5JLcoGhUFjBEZG 3BLBNB0jVaxEbLKglGTJt7AkEhoDzeq2Jll91nwyqiprs1W3utXcfCetMNmKJZZqS4tlg2W7RbFY i7EKm7EVt5GgD+JREvZ5NPOMGLbBLjgIik6mwPh45OBdVps0wWWaYDO5pJC6O9q55QUCaQFnekCo PJle4EggYLhCIeYfHiwLQAA1LthIV3h6ms+txnuAlC4+Tb5pcfELBz975+nXm1buDLbTsxdX4oBL yuITxcH2zz8PXrjnb4/iPRjAXHyaxxquZ9fQ2cRCljAoms67+Wazprm1JE3SwiBHydH0JJ0BWW6B LCtQYFIKdN3Ees00ZAjcVQSyHB8JV8GtMgR4H34U7Mu2X1og7ZL/Fnz0VHDCeeXw+Z5559C8JvIQ I1m+T0dNd9NUPr1Yb9Zb9V36Uf2Efl636IrgQk1SGahmzoUKBZqqayFVDb+MhbcC9R/1MiCmZ1JX V/CS9IqYvGueMff99LWT5pYgzh8u5YQClQ9kYgl05OOGRhw+LJ3GuR9l5XAnKR64ui8oF6ifFbf4 1zVL2CJjswlbLFhAgZDlk4Yw5qWjjQml8ww/Ucj6MFnFTFuB7aRN0m3RtkxbtW2DTdGYmyUxSVei lWRFwjxQ85I11LVoLVkL2fMh7VXtpPaJZk4C7OEvH4pJlY7Soes8mLVD3Rf99WazWqg16tBoNZsl mTGrRVH0JpO1yaJboi3JFkkzuU1JJqnZgi3k4hplS6NkNrdK26SD0lHphHReUqUmTXbLSXKx3Cqf 
l1WO5tNLs7xLPiFrcgvDFgoYuWaUPjGj+VD3maesVrWQI/6hJhNh9TJbYKIq1mBRBKI06Mky+dMY eaAsgWxCOsNbmM7lSh9SaUfH4NDPYKHOAUwN1HcEyMtEc/dC2i7czKhrUnlM5cDruwYH6olMtNfX 1wvbEB6IfxSvhF6UvJiOmK5cuLQ9+E4g+M6O7zFpE16DIzeiFHXphDTw0mfKM5cSpfe4/lEMeEHo 32b/EApZfJUsEVBSmiRo0nJ0liPpFCbdUpLkk/IlRTKrHodDLVS5CExhaiFyESTzKvSYLWoh32xy SC3kQlRsVneRvqKsmkwSqaocWrww84ZAgN8OoONIwHGEL5R+QnY9fJjCA206ku+Vi7tGda5hT3W6 pJ2XlhPbt0rLO9N5tkn+UJlKmY2FIkOFf/wtCkoFEVjg2O7Y7ZBYi444iDZLj8bwXLfqU5lqzeWa 09zrjE6ADuC2TzC5JkjEniTsyBGyIgq1HUKsYmPI60SQVwn5Fg+gg5xNuJOCLZtKMSUOs3AyZWDP Bd8JvoK2fb9/qT14875n2Kc4CvcHW4I76CnHB3BhsDv4MbpQY1LwU74CbvvzlJdoBZFQ48/JpUCW 78D8CMw1IQvPTep1AYoehjkhY/GxfHaUnWAasChJsbrCTLYJZnM4c01QmK6EfAFXlY50HrnSQ4vJ eosbcVnIf5JfoDjlkXmy0BOb4hPwvuAllFNwDo7r2pdy7fgBB34XvHMgi+06pxyyvP2PoM5eCg7W mn/1kcH3JeVVsFPeUOUfcwvei0xCm0y+0x6VW2DBVgtaZEeOnKMpbiVJ8Sn5igJKrCNGNdmcUsQE RCVmggloXcKDjg150PS09F6WhRcLdBznaxnFNSIeB1Ce40wjzp0ulXKmH6Kq9EZw5dWLgv+KwgGz 2dD7mnD/pd0FAx7e9tt9g1gA1a4HlMP24NSbjqd01congjXq3FtqZ4XiqXSU4qkH/uK/q9mJLQ6E VaT4LYiY4USKnJmkxjxuKizJ6XPmO3c5DzqPOlWHkqv1dff19ZX6RkFUrs9ywsLyLaFQ3GrZZtll OUhVJks8DQrV7hY3w0WmVSa2KHpVNJOSHD5HvkNKhkwoIM+b2AcHmTBa0yUpwmnDuIgJFBTjTLbL IqIz3D1KmLn4CaWC9ZQH8ozLiIQggmHASAtJKv0HDMjgByQ3j45qpEvpx0RePbX8vnvP1QafaBr7 3UvB5ikbt8+ej+67b8oJnr/QPu3T2gcLNzy6ZOucPWfKzswrm3jDhoKVD80/8LGIU5R9HCc/4YQS /zUlFPYdWKwhXGvH3DDKv3KizQXmFrNkMseYmTkfcy531ifoDAbhCgV62m6x4YaRfR0w9JKfCgIR 8U6X2FTaZ8oeE/pnbEX5jhtu+w3KwTe7uoOfK4cvxa24584bpPcvOf7RDZ2fEl9K9wUtjMciOOqf XAD5MmvpDRoXNRV5tEmmwFNC8eZV20WbbjZTJNEkWc1TLHlWpUW2tmATSE0y3GSSb9IZMzU1U7xl Orn2558ykx+nEPO8P5VjzM+/fQxDAUqKNiebM/mB4nZgW3TtdhlMsslKLl4xnDz3Jw5xkOOeLjXA nXl0qFL4O/6Qhoc8fOiEQT4cyYfHI/fh8Ur64uDzwdcWB+PQjhE4sAEHYTjaXdIDlxYo7dwZciCh jiM/Hi5nw1UwAF7zt2out4tBkxdXxeOqRGxKxFVWUm3KXhRMljPlAllCfmxlzO1Ah2qBfpMik7UE dwJLmNyqoppU3B9L+mGJB4FFx/YrspjsRboj2pHsyHQUOEoc1Y4Wxwbhag85XnWcdHzisIMj3FNk MoX1L5LCisJjLZKFjr6GDLh5O06RWTveFiodMLK7gON41yuvOF4OiIMhKTFP6ERWR0DviSo/3meM 6B+f5g6dEumU44x0uUW9NMDriXSRvrP3Eh6d+/gRyvMxJeuL28pG3njgm49mxU0cPnJWkt8TPLnr 7OLM/Md3PvM3+59TXxzzQfC1c8d9I2Om4p/CXrzjLNdukp50gqQXCS/4N1eZm82t5m3mXeaDZrUn n5Uh30yBMSI3YialyvdGKBJlQiyZWzckUpzhZ0s6cJLv8IVyZovb4iOnsI3SYnVyq3Ob4T1OOFWg vNmC3DgYmouupVybEzOLDYtEegwgFbn0IpvFpYWcgBDcKRIH16Se8HSkNw0WssNeJ4Ahk3fE83yY xATOEZQQS68vLrnn9kdw3OZFc5c8vOxi8CyORvsfpXf/OP3Pu4JPzH0lpz/XuS7+h+0MZpI2pSmd wsNX+3PfjjgXwaRqV4uLXJmySmG5ppkmytv6Jvdl4AibZNsROXm3hj6tWGvVtmm7NAW02DCbu0gO LzLbzbLFrNkAegzCcPFk/l1ZWb2nJzo/ZQh2M0YA+S1yuFLPOkb6pMy4e6acCXZj0ulVCzo7pxZv /S2O71/jGJeQjn2DQRybho9ndapSqv/gY8EDg6PAsIgS2tMIkunX/kOt6jbKRSSoVVerbGYUJQt0 fkziiUGeijDTThk/pXsM9T7JfdjlKWi11kKHyu3abs0SquIvihY52ReO7nBfeH54c3hr+LZwNRxd k3GyZsVW6zbrLutB61HreatqjdF4GtRKadB0J6pFg7TR2rWaJLlJWFVasyZrpmhXsosJ6bq4ulG+ hLK7yC4XmSx2rfeEURZoD9Q73uZqECirbwhFAa4X9RQG6g1d+MGQCOKjokKHIqED6WkQ6UqMJ6HK 2V2S9tuNj5VW3Bzs/Cb4IY5671Psc+lbZuvz2XH867rbS59bTFkNKcPo4GcnhpZ9ThbiJa0coHxM FlLmz440WbhfZdy1ZtoOkWM9afuEO1ebQ5mkTbJM6jkx9khCB6upSLcV2VlRhNVsVyOgxzNkibvA jo6OrCO0uiP8CiEUsdzpkV5KAr0Z6fxmyM3eL0rJn4+n3+184Df6O3tL1ykYu6o0LOzYpZ1SxbFX /pVEO54WnCt9LN8IwyALDvtXJlsz+R0Qt7PEwRmDSV/HRI9pGSN5HX3D1cnJKZkpBSktKXJKjG9y Ul9s6Yt9x0Di5LDhk1vCMfyaZJbJSphEJ2uHd0RyH3dcdOSIouS4zLiCuJK46jglrngQDpquFych JE0nGy5KHV0UmWqzmCN7vB6/1CsLvNUV4MfYI5TH8FzGcHxvUfw+zm9zKIiL6pARi0zTF8psSApe DwVxnxHJfW4tdN2ZlMTlQzQ8FaKYHuHqTYVU3471D9Xeev+g3w8KBmeNOnjp0B0V8x5vHvPBntgh A7VAbtxdI3BJ8P1j/06vKS0tryqaVRvseKnFn1nmOpyxMLr53b1nRzb8pvi6dQffU9TB7uR+wfd+ e1ieU9ZYN29aXS1pQStlfj7yC+GQ40/RXQiTwnXLJH1S6HjlM/H7wfOmb006mCLMFrnIFm42mW2a pcf4L8vueHodLy6wVI322Re6wHJKWasXjLkhNhis2PPaaXbP/JbswZe6lWezur4q+eCNrlpu1+Sw 
31eOUdbshNH+hFCAl0zXS06bddJJJx6irMsZETbVAatVh9XsUHlOfLwnsa+nXRFzY8/lMUlXIjWL d2Jh1syZWQS5G/FXyjGBEnSmSfPav3/TWH2AVm/h2bpuRj0sOiw5TEJKe2CKRZ9iskjqJGlS6Bzp E8fHVjo6WpM0H53SFVWbwiiGSLJJBdkMZpMqpMLTOy6ZUGLHRXM8lBVwpQjEY0g63DlLI4KvtHR2 YiUOCW7Fp/Dc2uB9JJpL9+H9wUVdVcThge4LLIE41GGRPzNXQv4jo9Rsxv/0ZqZQHrBB3i7vllUZ Jm1Hcj4IdJ674Lfxwy1SqiRrqgX59lEeyjdvMHfcIl6HtJW+I3xOCsjehIwDnStG37qHOPp+4pEB Sy+EJKaWkg+Og3r/RD2e9EWLQ1/crjgWR0rDI8YujWmOSZrT7UyiDLvKed6pOj16dHQ0013RlMJE R9ldUVJRrLUoKs6sm53mWC1KCw9JzsivToVujYwb0voODiHlMnSrV8nS3dyQQjfNTqXsz5nVkZ3z SlOXxXTuTa9+5O3a1N0znjzKdv59/OBL37MjUwJ5gy4F5dTrGpZnjG1/susaMHQgjVYUDtn+IcIC 8slb6JNOmPCnjMAwAbNq0wwTMLTwrforDEDwxg0gs6bZt8bb+dTQhUefO8UWb7xhzNBLnXLqgpZz 7V3LjLiWRxzwU+Ma/2xyUAWRlNq5EVqsG6zbrZLU7OT5hsRKAItVOgK7k93MOZkfgvPVKgqDB1VV tU7muTnrOQuHblMoTY+yFZkiiiRSZe3y83C9OBAbBxAj5QhEiKBCZ2KK2onintdBixmHsZ98jbZg xxdfB79CubtpY7BpE+vzDY4Ingx2d0Pwbzi8K7jj0B4MHOJxujE4V55B6wmjOH2d31+pN+nkqnkU dk027bDbUZ6Ek6IZ8pNvMZN6jsHFrJVt48fgGGeR1V2koU1SrRZN600zQnzzEJnl+Dp0mE/kco7k fpPzfNnFLC7p7LxuI4PguW9WVOTMeXzb1sdnxwbHKX8vmNv1X8Hvgh9J13S1+x458Oz9hg5Ic4lj GyzwX8uzYqbZfLZ8m5QkimJbla3Z1mo7aDtqM9lU0yR10iENQ5eJPkqQzmvfappmF4EJwMyKrLqq ma1G2Gg3Lvu5J6CQGOgJiAbrFAn/lT21uLWzs2bfuhwpL2VHQ9cOOXXeiqSePHY4cdYPLvoPa27U dX4zB6vc5AN0zHcjK2AI+VHIb3pyKecJpSn82kG3R9uT7fwXWMoG+3b7bvsh+0n7J/aLds2uTPbF 5McwLcYdkxTji6mKaY5pjdkWsyvmYIw5xgWuyYfM2CrS5aPmE+bz5m/NGpjjzFGLolZFbYqSOynZ OmlBpulRbjlc0hxuB9vmOOo44ZAy6fARxu/mKeeRivqGF4X1tYT1KN4rZQHHK4FAPXeQXOnqQ2lj x8v8glPkPD3ZTuCHr95fLSX1/+EAHOmKkg1vn1tad9tdJWuDn7/1yK67C4ruWj8L+6w+9eziG3OP llZfMzmj8fXN9xe+mFc9cPyyHXX3PublUh1OXvUrpZLsrdw/1sHvZvP1Zn2bvks/qGs6yVc1SaZJ LXARWOjYuw1kSI7IjGBgD1P4aUg1FUGRFE7xQaytI0uc7I+U8Su/AL1l0dupACU/EZTvZPjEzS2P 6r5IEaEeX726EwcE3y2YPXty8UMP7pUWvvbu3OBfXgsObiwb/nG/p3cJrSS/kC6nUlQc6x+oRyDa JznIL/2gecVC9zTQwi1FMlBWrqsO9Qqv1MUTcoUn4h6nM93JswwfldKojGcmBzd3Hp55V3Qnzh7i x1vZwa65x0pGszPfvwGhc7iYXYXt/pn8coOJbcVIlsgymNRHHaSOViVJ16P1an23/qquwAYFWyWK alOYMkUk4jzlYuoaYGtQlqGJD8APX/xXCSrKh7of8Efym1YfRVWmywhyPVpUlYk18KPR5SfsLpKr OHkbqsGXJU7V8Sx7drAvHaKBztMmF+XCDex2voKw7gtqJq3ACu/752hkJjqgJqNxg5Bpa7FtsO02 El0zVKstKuM3PgzVKYplivUGUG6Q+cWC2WzcJ6w1yWvFfYKbTE+vIoU5qsv8WuGNpyyha4U3/MM4 xoZFxfBvZzj/toaphck8YNPRu14316NWL6NFJkJdVdQr7hXEH1iJe+PLbxZGxfxwXdwjhZ5zlnG9 EIHpGJEuPzwn+G7wvbnBWtROoDob7Z/828Xu6lpKYrmRre1aw9bzvSUPPZIkEwb7/BN9OjLFFGlK NEnmsD5hg8IkKRlagDE602KzHbkPYS20Oxvo3WKxQouKqmydkhmGzWEYZjdPSZKbZSabTDYLv2Hh y5X5DYvHwhceuptIsmCmpcBSYpHAEiYXSbYwNZQt0UEodJuCZDNHxK+HxFc7v0QJOYDegxCtVZW9 GZRDJ4U2viI5+N0TGD81uB8l7I8aSnsX33bqzvIj0tiuvuwj8XdU2dKg3n/wGQM9/+zD/7d6jIEz Mu75Bi7x3+wauEyW10OjEH2VgavggEbxH9US9bXBbwycXASsFzj/72szvGLgMvSHxwSuUr0Knxq4 DB54XeAU9EBHk4HLkABfCFyneisONnAZBqBT4CbiYjX6DBwhGp80cBqHmQxcgmF41sBpTNbTV4Fo lm7gKnhYoYHr0I8tNnAT4QcN3CxfYBsN3ALDtCYDt0KBgZu5HLQTBk5y0P4scAvVh2vdBi5DinZO 4FbOpz7YwIk3PcSbjf/Xul5s4DIM0q8RuIOPY9A7+DgGfQSXp36jgZM89SqBuzg/+kMGTvzorQKP pHoXthu4DEMNuUUJ+tcMnNM/IfAYTq9/ZeBEr4fWGMv31xRn4LS/JkXgV3F+jH28ivNjGijwOEE/ 38A5/QSB9+f7a2o1cNpf0xKBDxH0jxs4p7+b4/plctYvk7N+Gf/6ZfxbL6O3XkZvvUz+VkP+2cvq VjbUVFUv9yTPH+gZPmrUSM+kmvkNyxqXLVzuyV7WULesoWJ5zbKlQz3jams9grLR01DZWNmwonLB 0BkNlfOa5ldXLvdMmu4Zv6x2wQ99f2ji9bMqGxppFM/woWlpvS2Tpg/hbTADGqAS5kETmWM1YcvJ QCbBdPqeTG9VonU5VPwC3XJowjBYSib28zQL6e2X5soTb8t/nkLaIP1Zell6jr73QzYsgzpYSbQ1 xGG1oEqmHgOpHA6j6Bkp+tVQXQPRNhIsFFS8ZwP15d8VVFND2FIYSi3joJYez2VjNoq3SiorqVwh VjD0F9Ywlcbi8mqi1tC4K3+Sh58aYTy11v6ihHooZgluGo0Z+HqHQho9/9mT9xvS2++XpPFTq0qh 9gVi9VweS8XqPUSxkr5nUd1Soqyk755xltJTIUbxiNYfdiJF1HBpVhBU0xxLCFsq6njvRvHWKLBK If+FPzn3QsGth94qqGWloJ8vZqwU8zWIlgUEXP61BMuJaujPSJzr9GIxh0dQNhp8N5I8an6kHXxm 
Lrclole1WOFP8czflgnee6i4BNJhmGhpproaMT+XQYVYUa2QWJWgbRH1lT/SPj7HArGyZcT7UkMK lYK3JkM3Q1wvFzJYYEhqOdF7xL/1ca6Xidafk4/HWGOPrBsNiYVWwCnqCFtIveaLGr7LS4xd5rOH dmCBGO3y2SsEB02wip5aQV8t5m8QNBWGZl+pkymGpCoNTeqRZD2NVCn0pWdPmoUWeMT3YjEz75tA 43FZ1YpZVgrcI3a+xqir+Bl9SBYtIR1b0ruTS4y1VZL9Vwj/MF/wXiHWVkvYwN4V891sEnZR3bv+ kK3+nBZxOwjZynwhVT5mY+94PVTzRf9GYTeVwgIup065TE+qibIZxpIUftjBn1rrQjFij479YLM/ pUXzfrQP3AuG9Li2t77CGLOmVyNDcm8w5NcorKXKaKvo3fHGy8adYMzeIKx9udDBBJj4I4n+3Khc F2rESD+/u3UGbQKNXC0iTR2MhlR6msUzlMa8UheHCskvIZqQFYS8O9/5anpP/b+InDX/Q+ScSBRc fito7Bph0z9Pmy/mbhT+ZrnYm1+Kpp/S/iyGf9PInxLVz1POEm8/314gtH6F8Dq/RHf5in+BMzlO HiNfLWfTQShT9svXyIXyqF8Ydcb/RWaR9z9wVshXgMNplF+i4S11JLFfkupEauW+oIIOAv8HaQv0 uAplbmRzdHJlYW0KZW5kb2JqCjY3IDAgb2JqCjw8L0ZpbHRlciAvRmxhdGVEZWNvZGUKL0xlbmd0 aCAzMjkKPj4gc3RyZWFtCnicXZLLboMwEEX3fIWX6SICm1ciIaQ0SSUWfai0H0DsIbVUjGWcBX9f M0NTqZYAnZm5nmvG8bE5NUZ7Fr+5UbbgWa+NcjCNNyeBXeCqTcQFU1r6lfAth85GcRC38+RhaEw/ RlXFWPwespN3M9sc1HiBhyh+dQqcNle2+Ty2gdubtd8wgPEsieqaKejDTs+dfekGYDHKto0Kee3n bdD8VXzMFphA5uRGjgom20lwnblCVCVh1ax6CquOwKh/+ZJUl15+dQ6r01CdJCKpF+Kc6IwkjkjZ HindIeU50YGoRMoypIIj5aQr9+hg7bX77Xw3yknE9/hJSStyclBQa0FBai2oMlsdJKSjA2QnCpI8 JXlO8mzdjGwVKQXPVELy4pGckzynw5ViPQBZXv7mMvX7qOTNuTAlvBo4nmUw2sD99tjRLqrl+QG4 eam3CmVuZHN0cmVhbQplbmRvYmoKNjggMCBvYmoKPDwvVHlwZSAvRm9udAovRm9udERlc2NyaXB0 b3IgNzQgMCBSCi9CYXNlRm9udCAvQXJpYWxNVAovU3VidHlwZSAvQ0lERm9udFR5cGUyCi9DSURU b0dJRE1hcCAvSWRlbnRpdHkKL0NJRFN5c3RlbUluZm8gPDwvUmVnaXN0cnkgKEFkb2JlKQovT3Jk ZXJpbmcgKElkZW50aXR5KQovU3VwcGxlbWVudCAwCj4+Ci9XIFswIFs3NTBdIDEgMTUgMjc3Ljgz MiAxNiBbMzMzLjAwNzggMjc3LjgzMiAyNzcuODMyXSAxOSAzMSA1NTYuMTUyMyAzMiBbNTgzLjk4 NDRdIDMzIDM3IDY2Ni45OTIyIDM4IDM5IDcyMi4xNjggNDEgWzYxMC44Mzk4IDc3Ny44MzJdIDQ5 IFs3MjIuMTY4XSA1NCBbNjY2Ljk5MjIgNjEwLjgzOTggNzIyLjE2OF0gNjYgNjkgNTU2LjE1MjMg NzAgWzUwMCA1NTYuMTUyMyA1NTYuMTUyMyAyNzcuODMyIDU1Ni4xNTIzIDU1Ni4xNTIzIDIyMi4x NjggMjIyLjE2OCA1MDAgMjIyLjE2OCA4MzMuMDA3OF0gODEgODQgNTU2LjE1MjMgODUgWzMzMy4w MDc4IDUwMCAyNzcuODMyIDU1Ni4xNTIzIDUwMCA3MjIuMTY4XSA5MSA5MyA1MDBdCj4+CmVuZG9i ago3NCAwIG9iago8PC9UeXBlIC9Gb250RGVzY3JpcHRvcgovRm9udEZpbGUyIDc1IDAgUgovRm9u dE5hbWUgL0FyaWFsTVQKL0ZsYWdzIDQKL0FzY2VudCA5MDUuMjczNAovRGVzY2VudCAtMjExLjkx NDEKL1N0ZW1WIDg3Ljg5MDYKL0NhcEhlaWdodCA3MTUuODIwMwovSXRhbGljQW5nbGUgMAovRm9u dEJCb3ggWy02NjQuNTUwOCAtMzI0LjcwNyAyMDI4LjMyMDMgMTAzNy4xMDk0XQo+PgplbmRvYmoK NzUgMCBvYmoKPDwvTGVuZ3RoMSA0MjU1MgovRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDIy NjM1Cj4+IHN0cmVhbQp4nOy9eXxTxf4//Jk5c87J2iRtmq1pkzRNCk3ZugCFSsNSQNlXKVIo+y6U sqksRfayioqAC4uorBJKwYJ4qYp6RRFcQAUEVFBcEK4iLqXN85mTFIHr/X7v83ue55/nRcP7fGbm zJyZ+cxnmzlpAQIAWigFARpPnDR84g87i78CaGQD0NQ8OHj88NfJyFyAJfUAEvTjB0+fqP5A9zgA wfvgHjdh6GCiafs9QLAjgHnzqPGTpx/ZVBLE+80xv2HUqOGDY8ebtmHda4gUzA47knN9JaYPIpqO HPfQiAWrfvgFoPePmG0xYuLI8dNSt/4KUH8UgBgaOnWye/XEj6YCtMD+pMZDxw+eOLKiywqA+M8A 4pYDHzv98MdHFlxYO8iQ+6sqQQX8Z9PXqWmc7r2n/J9/7qoZaWyh6oxZNdYnSgW8yq1qu0JbI/y5 qzbT2CJafvNHrMdLxHp4yYOhIAIFIzSCNgBMp6lBXlEp2oTm3ESIfgoDWQnEI+6VE2Ga2Bf6kYXQ n26DGRxCIgTZDpiEdbdhvjXSA7wt1u+DOIfIRfRFOKJlXRCDEb14Huvu523xGRP5cxRaAv1VLpgg 9g3XYH+rxXdgBOI5TG9iX8MWKQfGY34ztjvEAJrxOthmtbQN1mD5M3h/KJY9h7Qf5jdiegC2axxN q+VlYOcUIWF5fXzOkuh8U4XXoSkrCX+JcynAZ96HWIB9dEfaHtEJ68QhbYNYSN6BReSd8Ca8jxTm Yv8LeTmiXZR2xOfMx/t52C4F83Mx7cBxSEgNCA+iHt0BOdQMB5E2wvnfH5k34h0Yxed8c044/uiY /h2RMXa6FdjnawgvzQlfRKq+ZWx3Yu4duFfIhFKkYxEJiB70KIxnnYEgv9aKF0HgQMnkfDqLuIcN g66YJzjOXmIFrON5RBcFJeEa9gxsEK5Bc7z3sLQa5zEM+d0EcR0a0R+hgeSD2Shf7fD5cxDP4TMv KfIwDHpj/w2RZrKLigwtQCzFvq7U8YnzBvNzcF17Yl83uMZg+16IDrgupYhxfDzYfyPOc77upG9t Dta9gHUGcGC5VQHOncskb8Pb47N8UTnc9BeFTVhnGfL1PFKGiOdjqIMiZ1HgvbfxOXaEhEhENERc 
RGxCjEW0QLyCqId9A/YrKPKKMsNlU5EPlA3xHeQhjk2R2cgcnlPWM6IzG6PP4v14pB0wNgoPfybX Fy6zOJbddc/mOsVlpo4q8j2Wyz35F58nl6mbFHWP/QAd+BgUHUTZqqNc73DMXB9W0z6wCOk6lOO5 XGb5+Ooo5wuXNYUnqBNRmnvLXBsrOoJUAPBGZX1uHa3jxU06CjbjM4ukIWhTNkBHNhk6Co/BEHYV 2gn1oaHYGMtwPlg3RH+AnqoqyMS17Ib5tXfQNRzyCTJGrMJ5bkd+noBnkafF7ARNZieIKG4PfycC eVfcTmcp6X+jd4JURe5xynHrvf+75f8noCfF7Wgzt4e/F0+EwzifVVwn5B9IY4S7jmJ5OaIUkaYK kDWqsaRS7gNGCX0bYgILQgsxCM1YFa5PPNp51AUs7yN+CYeEZbCYnQh/TkqhlJ6ABXI8DKar0aZh X/QkzOXgz0c68RY5uk3m7pSlOlonr3dSbvOjMuVCKqH+fRDFhSiuI35FOXqeRPpoxu2z4h/QRiMW ROQ1/OdN+XwXXkC6pE4+75DTsXfIp+5OubyTKr4F7XudnuI4FtfNn9tHbuO4jeR2jtuZuvp30lva l9FtKMfcDh+F/lG9To7iPhzjV1HdRzuM631/OCy1D78kVYS3CLHhLVIGpj9DiOGXcN7Tb/rUfuHa qD+tX+dLI+WgrfOjYiaMj9qzzYq9+RmeUPxoX2V8amkXzBarcd3RBirj3RDVQeQnjnssK0Ker4Ol OA+7sBD1EcsRAzhPlLUAsHG/wH2i8CTymfuiZTBXOI3xAm+bCSbFX+TB/Tj2d5Uy9Kmc8jLxftgk /QAZrA/a2ioYxteKz4OPh6+9agroVfFoJ05AE7YV68SDButtUHgQhJcUueBtx2JchLyQh4KMMtsV 6/DnbVTaBCE2yo/NCi+U9hiLcPnivMBnSvHQU4knfoD1Yh+4H3Voo1wKG6U+qHPxsAWf8QK268PH gu0cir9+Eh5A/VqEtmkR2hxQ5L9/uFrYjvOZjnYdIZQij7aDTSxFHo5V5t6ORWzsQq4/wjbwcxmR nkQ7zOOJJ6GMBSBfGgvLsGyZiHYS+12CZfNQfxuj7i7G9q6o3QbsezGW87Z5PJbhMQLXFzkIcVKp EgeAMgYep2D/wnewUbgPFqEct1Y9iXyYDw3QXxCUvSREkwiU/KwolkaglBkjlHgEI8xUyjPhI7pN 0KLcch+6n82B0awvZAhNwM5M0IB9iLr6BzwtGGAQOwJPs0pYyvMsDuoJIZx/BcaWvPwYdOfl9CPM r4H+LBfbL4IH2SAoEXaj7H0CGjYC1xrbictRTlKw/c/43CjI19Bf6Iu6tQDTf4R38HpKHxXh+zlY R2igtLsFyljrcMeYaSfk2324pjhenr5tvDjWm+OsG+PfjE+ZJ38utuN12NOAe5bwGYQvQmt70GWw HbGBnoK2Qhd4iGxBA/MMtCcXEc9EsRM6KnQ3ogf6+GwyA9GQZcMriDmYTkf6D8SuSB5jt2w4jZiP z34d6R6+L+CgbaApp1j2HGIN4r26e7eC9/V35bdCTIDb83uhlINcC9dw3Fkf+dwU+2vK7kF+IlAW V3JIs6G/PBXXLxXLk/CZd+Sxnwy2F8b8b+P530COQWOFhxEEb51j3XogtfwXOHMLdXMa9Q3/j8b3 fwJc39mIQoW/P0F8VIZiyElIRtoXaV9hCkznwHwDzBfU8ZPg7lfBFnhcKb+5fpFylBXcUsI9d5bf mb9zXf+3PN0DL9yKOjm4KQ+rYB4Hy8P6iDvzqndhHof0Ft5769/z7KX/Bf0hTVinjAkUGbsjL3VD n4mgKThWh9JmKcfN/DHUZQSvq7TXwzIORXcRtAJGc9y8n432G3ELX5tyvmKfyv269alblzvXB8cX ZB8g+qOv+AAaI+2FtHUdvSnfUXtxm8z3iMj7zTy3JRfvqPOXTvylG8e4r/n7Z/7/Cag7RxDvIN7+ /7ovbmW4jTByO3EG45A8jCNPYHzyAMwFqEFbcqMR4kW0Q72Rfopl6L1r6yP0mDZh2UikzwJU/4rp SVh+IoIwZQmwIRpX2rFsX7StKvq8XpH21f8E+BMl6s9dkfbV2xBjMP0vxExMf4H0daRrsP732G4e 0jci92sGYX4q4iDmf8D8OEQ/TK9EGo80HRGHiMX2qzl4PPJv+9D/1+nf7z/+W4oxy1Acp4ufeSGd cece4r+mdev5v9A79xp16/+/0VvODO6gET7gnukrjPtCt+59/qc9Th3F9ay9FaxPuAZjSh2Po3ks y+NnJX6MUmX/psSx2C+AuY7y2JnHrzx25vEr0o1IF0miMp4+fJ/PxwWKS1HgVBQC1D0xhylNP2Dq pvyMlh+DQnOYRWaTFWQV2UhC5AwJ0wL6Dn2XfiEQQRDUgleYJZQJS4WNwgdMx7qxAWwQe5w9xZ5l z7M97FX2OftO3C++KX4vXpN0UoLkklpIPaWx0nipWJolLZDWSJulrdIu6X3phPRH0vykP9wGd7w7 yZ3s9rsbuhu7M90t3LnuVu527gnu2e7N7pfcOzyiJ85j8SR7/J6Gnt6egZ4nPVuSabKUbEiOTY5P diS7kusnB5I7Jg9OHu6lXqPX4wMf9el8Rp/ZZ/M5fSm+dF+WL9c3zlfqm+db5Fvqe9y30bfDV+47 4DvoO+x7z3fM97nvG3+uP+hv4y/yD/WP8I+9JF6yXWpxlV5tUk2r3dVNq3OrW1W3rm5X3a26oHpm 9ZLqJ6vDN4bU5NX8XHsjfCMc5ifgsEHh3AayixwlfyLn3kbOfSbATc7NQ84tF55nhMWwHmwgW8lW s3VsE3uZVbLP2CUxJL4qHhevRjnnkYJS0d9y7mpSadIGt84d57a63ci5NORchjsnyrkxyLnnkXPb buNcL88DnpU3OWdCztmTk6KcK0oepnDO/R841/0m51b6Nvi23eTcEeTcZ8i5Fjc5N9w/5hJROEeu smqCnEurbo6cC1a3rW5f3bf64eqy6uXVN24MrGmFnCvlnAt/jYL5ZNhMj9DXhEbhM/R91AgDSuQq Mo2MJZNubMD8aC6ztYHatNr6tfUwOQMehqkwDkZBZ2h144sbZ24cv/HejfM3PrpxjNe8sfbGmhs7 bmzEz+M3Zt+Yd+PRG6NvZAJ8XQjw1ZnIqf75+Ygnv3zg/Lzzf3y55fw0zL2CQLt6vuz8zC+nnBtz 7qHzB75OP7/83JZzq8+uPrvp7BKAsy/ytuesZ4vPomU+2/hs8Gzm2ZQz7c/kn8k9k3Om6ZnMM43P 1D+TfCbhjPkMOf3T6R9OXzp98fRXvNXpt08fOv2P09jL6bdOv3B61+n8021Otz6dcjr5tOd0kqPK 8afjS+M/MNL7h/yi/Kz8jPy0vE5eK6+R35V3yhvl9ei/vpNaibg7FYZy3SVNb39PQb+J4Lb8VcFS lxeGwf/wI3RFS/P3d5YjnsOIqCvryYqQDrn1LhuIGBHBf/ph3TlYz2iu6/80jjta+lm9m+mU/7Gm 
5j/e6XxbVoDnYR7MFwbCavgGFsByWALPwlbYjCFCGbJ1LjwOV+FfsAyegkXwBpyBK/AcbINf4Ge4 BptgB/wT3oadMASGwkoYBkdgOLwD78IH8B68D0fhWxgBH8IxOA4vw0j4CR6DT+Aj+Bhl9Tv4ARbD GBgNY2E8Su+DsAEmQDFMhElQAlNgMsr0NLgE01G6H4JHYCbK+SuwEWbDLCiFOfA9/Aj7yWryFKFE IIyIUA03yBqylqwjT0MN1BKJyEQFYfIMeZY8R9ajLdpI1ERDtERHNpHn4Tr8RjaTF8iL5CWyhWwl 28h2soPsJC+jzQqR3aSc7IHf4QQpI0tIBdlL9pFXSCXRkxiynxwgBmIkJhIL5+FLEkfM5FVykMQT C1lKXiP/IIdIFXmdvEGsxAa7IETsxEHeJIdJAnGSRJJE3iJvwx/wJ3wFXxMXcRMPSSbvkH+Sd8kR 8h55H23mB8RLUoiP+Mkxcpx8SD4iH5NPMEJIJfVIfZIGF+AiOQEn4Rx8DqfgNJyFT+ELcoVcJf9C X/Uz+YVcI9fJb+R38gf5kwRINblBakgtSUc/BpRQSgXKqEglKlMVVVMNaUC1VEf1NIYaqJGaaCyN o2bSkMZTC2lEGlMrtVE7ddAE6qSJNIm6qJsupR6aTJqQDOolmTSF+qifptJ6tD5NowG6iC4WjaKJ XhHmCHOF+cJCYbGwTFghPC48KawVnkXP+YKwVdgu7BR2CbuFvcJ+4TXhdeEt4V3hKOrqh8IJ4XPh C+FL4aLwnXBZuCL8i/6L/kx/odfor/Q6/Y3+Tv+gf9JqekPQCFpBh96F4KQ2sxfYi+wltoVtZdvY draD7USvsouF2G5Wjp65gu1l+9gr6Gf2swPopw+y19g/2CFWxV5nb7A32WH2FnubvcP+yd5lR9h7 7H12lH3AjrHj7EP2EfuYfcJOsJPsU/RSn7NT7DQ7w75gZ9k5dp59yb5iX7ML7CL7hn3LLrHv2Pfs B/Yju8x+YlfYVfYv9jP7hV1jv5KvyQV2nf3Gfmd/sD9ZNeyGclpGsmAv7IM3cXe0ByrgMDwKr8NC tEXdhJ5Cd6GH0EfoK9wv9BN6Cb3hV/ItrWKz4CCshcuomS/AKpIHK0hrMpU8hv7icTINKskMcpn8 xIrZJDaHlQgFQn/hAWGAUMjmsSlsGpvPprIF7CG2kC1ii1kZW8KWsunsCbaMLWcr0CM/pvjkp9kz GNM8h5HNGraWzWTr2Qa2ET3180K20FT4ReB7RAmg7kUxoXihd5gdvCkwUZJVao1Wp48xGE2xceZ4 i9VmdyQ4E5Ncbk+yN8XnT61XPy2Q3qBho8ZNMjKzsps2a57TomXuPa3ygq3btG2X375Dx3vv69S5 S9du3Xv07NW7T9/7+xX0f2BA4cBBRYNhyNBhw0eMHDV6zNhx4x+cMLF4UsnkKVOnTX/o4UdmzJw1 u3TOo3PnzV+wcNHisiVLly1fsfKxVY8/8eTqp9asXff0M88+t37Dxk3Pb37hxZe2bN22Xdix8+Vd od3leyr27nulcv+BVw++9o9DVa+/8ebht95+55/vHnnv/aMfHDsOH3708ScnTn762eenTp/54uy5 u7Hj3djxbux4N3a8GzvejR3vxo53Y8e7seN/FzsGW7cO5rW6J7dli5zmzbKzMjOaNG7UsEF6IK1+ vVS/L8Wb7HG7khKdCQ67zWqJN8fFmoyGGL1Oq1GrZElkAiWQnu9tX+QO+YtCzO/t2LEBz3sHY8Hg WwqKQm4san97nZC7SKnmvr1mEGuOuKNmMFIzeLMmMbpzIbdBujvf6w4dbed1V5L+Pfphelk7b4E7 dFlJd1HSK5W0HtMeDzZw59tGtXOHSJE7P9R+6qiy/KJ2+LjdWk1bb9vhmgbpsFujxaQWUyGrd+Ju Ym1FlAS15rfYTUGlx0GFHN52+SG7tx0fQUjw5Q8eFureo19+uwSPp6BBeoi0HeodEgJvm5AhoFSB tko3IaltSFa6cY/ms4El7t3pVWVLK40wpCigG+YdNnhAv5AwuID3YQpgv+1C1ocv2P7K4sNj2/Zb eOvdBKEs3zbazbNlZQvdoQ09+t1618OvBQX4DGxLfe2Lytpj10uRiZ16ubE3Or+gX4jMxy7dfCZ8 VpH5Dffm85KiMe6Q2tvGO6psTBEujaMsBD0f8pQ7HMH94fPgyHeX9e7n9YTyErwFg9s5d5uhrOdD e+xBt/32Ow3SdxtNEcbujjFEEzr9rYnhN+8pKaU6T3XqeZOzhI/Iey8KRMg91I0j6efFOTXnl+HN oWxoc6yGPwUEW4WG4YqMDqnbFpUZW/By3j4k+jBGLPsVbXuR9/KPt5cMjpZIPuOvwJNcTm6KGt6v S4cCgVBaGhcRuS2uKY6xlZLPbpA+tZJ6vRONbiTIPuiOvB1c0KIRst/j4Qu8pDIIQzATKu3RL5J3 w5CEcgg2ChSEaBG/U1V3J74Pv1Nad+dm8yIvSnKFsuuLD6n8N/8ZjJa4/FEtQsTyP9weHrnfqZe3 U4/+/dz5ZUVR3nbqfVsucr/5zXvRVCiubT8hgUZTNEFQ7qJQDrhZmWf66ULMh/8kRaiHVcoqlEql hLjbh4xFHSPXAo3H8182qgxf5a0U8lez6DBDLQK351velr9teLoyAQfM/LRT7/5lZZrb7qGoRTq8 N0pQ4qF3P4+7bQj6oGb68F9luKo5R0FCKIgsa8sroPxFiqLZ2yomRNMF+MOls0F6ezR0ZWXtve72 ZUVlgyvDpUO8bqO3bD99g75RNjG/qE5wKsMHliSE2i8tQF6NIi0atPaCQbDCFUQYIYALr40Q3RCD ECsQ6xGSUo+XTEDMRhxCXFXuBAVr+arMYCWSJQrZM2ZchpIdHMkOKFSye+4viNAuPSK03b2Rai0i 1ZpkRYobtonQ1PQIjfVllHKq0WdUtbZg6H4cQWEiXgk9DAZCwAUbhHgIIaggRUuCQuyeFH/G+kMC AwwHBIJhqStcJZByvSmjtYaG6RWIBRf9iV6O3KGX98SYMta3vo9+BbsQhxAC/Qo/X9IvYTY9jxpg wGseYj3iEOIY4gpCoufxcw4/Z+lZrPUFNELkIQYh1iMOIa4gZPoFXo30DNcn5crTeQhKz+DVSE/j tE7j1UBPYeoUPYVD+7i8WU7GfiURaBRNuHzRhDUhmoi1ZFTSj8r/qO+qpF/vcQdcG1o3pp9ACEGx s0/w4Z+AG9EdUYSYiJAwdRJTJ6EUsRKxARFCSNjmJLY5iW2OIN5HnITGiCCiO0JFj5djN5X0WLm/ jau1hX5A3wErMvUo/adC36dvK/Q9+pZC30WahPQIfbs8yQWttXgfsI0RqRFpI7wv0tf3pMS6wq1N 9BCyx4XXRog8RDfEIMQKhEQP0eTyYa5YfMircEQFWLMcvlPoi7BJBcExrqC/LcqYm1/8Le7BFF7W u9f7adC/ei1m+cW/fBWm+MU/bymm+MX/8BxM8Yt/3FRM8Yt/2BhM8Yu//yBM8Yu/W29M4aWSPvdK 
SqqrWbexxN3aQKchl6Yhl6Yhl6YBo9P4B/5gfGxPl6elIcfWBQP101ylB0jpQVLak5RuIqXDSeks UjqHlOaS0oGkNEBKnaQ0iZQGSemrpDmyopQEK27L5gRtpPQIKd1JSktIqZ+U+khpCil1k2bBSuop vzdTIfkK2dOa6xXSe1plGHCMHuSoB8Xag2p/CK/HEGElF8RK7uRIZXsSp8l70vIi+YYtMia07kjf xIZv4jK8CecQDBfoTRSjN/Ehb+IDDHjNQwxCVCGuIMIICWsn48BXKFcDXhsh8hCDELMRVxCSMpwr CAoTokPcpQysUXTQ3XiOvomfZPx4qCeYaHQaA8aOwgonMSSRbknhJNoMLHyXH2tSmXC3tu83/e+/ 6UHdWk2X0xWQiAuxMkpXlP+R6Koka8r9r7pax5OnIImh1JEc8BMf0uZQouSzwaniNAucdDvSjHJn X2xmKPenuw6QGN5qn+sP5wXXd85KislLzlddn7orGSl3ncCS7ftcnzgXu95tVKnCkoP+SoLkgFup ut/Z3LXziFJ1Dt5YV+6axck+10xnB9dYp3JjeOTGwBLMBQ2unv7+ro74vHbOIa5gCT5znyvPOdCV G6mVzdvsczXGIQQiyTQcbH2n0qk3SXlgn2aVZFQwXV4t95O7yU3lDDld9sguOVFOkM2qWJVRFaPS qTQqlUpSMRVVgcpcGT4fDPADYLNk5IR/Z4AAU9JGyq/8rJjbNaKicB+E4oROtFOvNqRTqGoodBri Dl3v5a0kGnSgorcNCcV2gk6924SaBzpVyuGeoWaBTiG5+wP9dhOyvABLQ3RRJUHvV0nCvGh+Ag9V 9wMhpvnLEjitN39ZQQHYLFPzbHmxrUw57dv9zaUoeg389WO7LZ0YWt2pV7/QtsSCUAZPhBMLOoUe 57Hsftw/X81vtx+30kgK+u0XWpGf83vycqFVu4KCTpWkr1IP3ORfWA8l5l9KPVUSuHk9cKuSIvXW Rer5sD3WS+EE66nV4FPq+dRqpR4jvN7ukpT8drtTUpQ6VjeUKHVKrO5b6xzxYR2fT6ljKYUjSp0j llJeJ9RKqeJ0YpUkp1KFOMCpVHESh1Kl719VGkWrLL5ZZbHSk0D+quOM1NGfr6ujP491Av/tz/A2 gQDZ07Jg6AC+Dyjy5g9HFIWWTB1lC5UOcbt3Dy2IbhD8RUOGjuJ08PBQgXd4u9BQbzv37pYD/ub2 AH67pbfdbhiQ37vf7gHB4e3KWwZb5nsHtyvY06F7VrPb+lp8s6+s7n/zsO78YVm8rw7N/uZ2M367 A++rGe+rGe+rQ7CD0hcoMt69324VtCnAsFOhe6hWg/JalOApaGMxTmylCG9Lj21WwgHGv9inxShc hzs6PYLfatC6QWt+C3WK34rhm73oLduslp6EA2RL9JYRi03eNhCYPKVkCtjyR7eL/CvBHyyaPIUz PHINlPynH7yXj/u2diWTATqF0np1CuVhnLtblrG0iE8p1KKuTKvNx3AzUtgQC1vwQkG4WZGX5fIy tTpa8d/Xf0qUtuVaUEpf3UOCSWQylBQIoaROvSmagt7RqPoAhkvcPZQU4ARLSICU1D1DGTZE0sDn W4fJU6KpKB8mR2mkFTYpqWPHzR9sg6ZKPAB2hEN8CezMDzaA8LeIS5zWjg5f4vc5pd9j5cooALbA TjIadsIheINcBX6ytx8qgEc87eAZmAFPwEL0Yv2xZDH0xI+I5U8Qe7gCGsFG9GMb4SjWvR9mwQGw EFv4O5gN84WPsdV80EMytIbuMAGWkc7hKTAAzrG50Aw6w4MwkZSG+4WXh1eFN8MLsF/4Z7gGtOCA ofg5Gv5J/Cx8BhpgiydhLZwjq9R7IYi9lGLNZ2ESrBMKGQmPDP+JI/DANBwDgy5wlFTRAD59OHxL bGSG0Baf8nw4FD6MtZxQCKNgHRwg2aQD9YgDwl3CR8GCfUzHp66FctiHn0p4DU4RnXg1vDl8FeyQ DvfifCrgA1Il1NbMqc3jjEYu1YccvDMB/gHvwHHiJa/TCaJOzBCD4sPhT8AMTaAPjvYlbPkN+Y3O ws9s4W3WPtwGYpAvj3Fuw1vwJXGQRqQb6Uvr0wn0OWESqLDHJvgZBqOR32vw6WdRavZRHT0mPM+2 s2opsfZ8OAZXxA9Pw7PwOtHjTN2khDxKTpKvaVs6iD5NvxKeYFvZR/JgnPVAGA/LYDv8RmJJc9KD PEBGkRlkIXmMrCVHyXFyibamvelYekUYJRQLr7E2+OnFSthccYG4RLpU26/2cO2Htb+FM8ILoAfK wxwc/ZPwHM5sPxyDz/FzDr4iItGSGPzwU98+5BH8zCLLyCblDLoCezlOviLfoQf6lVRTdKxUogn8 lBU/XjoJA8on6DP0GH6O0x/pH4JVSBYCQraQKxQIE3BUC4WV+NkrfMkc7BgLI58zxNXienGLuF18 g79Pkx9Fl/7+jedr0mrO1kLtotrVteW1FeEvIR7XEJ0FbqFycfSD8TMG13s1Stwu+JjokHcOkkZa kc7ImUFkDCkm05GT88g68oIy9pfJQeTSp+QKjllPncqYG9Js2oZ2w89AOpwWY+y1ilbQk/RPQRa0 gkGIF9KEDkKhMFyYLDwkrBZCwvvCF8JXwnXhBn7CTMNcLJn5WYB1YIPYFPYc+5Z9Kw4Q3xMvShpp vLRAqpT+hUFMK7m73EMulFfI++RPVEX8FBX2wiu3vuog54U5Qr6wF5bTTGbHHcsHKM+DYJjQhaKk 0i1kEZ1JKmiKOF1qSVuSrnAVt/ZP0LfpenqdthS6kE6kF4zhv6nKfyQz47/5ncvehMvsIM7tA3zy dElHZtErkg7KifJ70+QtoTELCO/BKeEckdlGOM00xEou05eE7igFr7FWYj/wCM/Ay0IxmQl7aT6A plq1FOW4K9mGdqE3ySC/C2GMeruiFDUTvoa5MJZ+BpdRjxfBU2QYGwnLIZPMgG/hRdSK+uKDUpoU T96lo1kZjSMVQNlW/vvMJIUIohnmkUJhnXSFfg5T4BjTwFlhB47+GH1Z6MKuij3JKNSAmbAAisNz 4CGxH/uIjASB9AUfO4/WbYaQwTxIZ6NVGYA2bR9q9wG0A62FLlhiQ8npjHLRBy3EOvysQTvBUIJG o47fj1bsA6iQetNKGCnGELQ6AOy92p7QP/wirA2PhAfDq6AB2oOF4Rn4xC1wEVbAFjK/9hGYiDvH z1G3O4vt6TGxfbgBLaOf01509e3ri9z2ERt8j5+XoT20El+FMvYp9IK88NLwCZTuemhh18IQjE8v 4Cx/wh46ClWQWduV7g63FybifM9Bj/BLYRfRwKjwOOgGB+EFWYTBciDawbj/B7iA40tDjECcxBX5 GUBYhTw4w3/3H0AeCKDaAaD+5j9D0/d2aPcB6A4C6E8BxLwRgaEpgHHpf4Zp078jLguxBcA87L/E mggs/QGsywFsFQB29J0JGQDOdIDEZwCSVuKeF5/rxk1CciwCx+pFHqT8DuDHstRgBPyFZtoYgACO 
O70NQINPABp/BJCBvMl6mP/dhLu4i7u4i7u4i7u4i7u4i7u4i7u4i7u4i7u4i7tAUKK8cBH5t/pl aFNByQVJrqRrg3EgsgsCaGR2gYBdJYkXqHCQNgE1WUsagi1gvJ5bk9vVeC23S00u5GHaeAMvTRp7 TB6TDy8EGNxwC1U3gvxL9m5Wxd/1b6s9S+bCUdBA170a7HC7VEm6B/1EyKWUaEguaKiAGZCayy26 wSCYALNhAw5ug3bjGuzyWuG1C8bLuUbskF+Nl401l4kpNqdJ48zszHizJKc2bdps39Hu92fkNBWO Hi1e4u9iH/wA9tuaVNIxdDzOMT1on0gnCrQL6YJdeoE6xIlYwc4mLrMFuhovFBq/gUZdLjdpDMWk MC7bE9+a1ieVe/fy0R/Ay0IcvQC+oI3yweZGhrgL2Aa8v4Epo7xeWHgZBxgZ1IGjR48q33IIf0tz xI+xba/9IITPlptzaGX4bNBtznlKIFRYL+wSqDAViJn/Ih3BehrhEtBLpJJsxc7ZnofxybnGa5eN +OzcvNyFYsNA4Uzj4SaNSWEgEE8yCdm6srafXfzxTzP/Lbw+4W+ZSawCIyTC3grJbTc6K8NXy6lb +4/webAgYhGG8PngECYtpIu0iwzvxohqWWuj+XGd4++zt03oHTcgfoC9Z8JYeax2aNy4+LH2ooSH 6DRpqvZhw0Jpjbza+K7tFD0pndSeNjgcSUw0J+n11hJ10OPNaqwmoDaqqXqly1QCleGqYAyWuiGI Q1uZ9M4ShU2By3gpDnBmcXaRwmIohOb8hyDijLFNMzMslth4I5W8yan+OKMlM6Opyej3JstSn7Ef b5haPrnNmI83fvLQY/u3zpixdeusGfcV0o8JI/fsGLSnNnyqtrb2zZ1rXiHP1j515SoZRcb8NHoB X4tzyKBq5I0GSoJuIag3ZY1ls+kKulbFdjCiBkmkglokOkqOaJSxx/IZAXFjW4dODOoNWWLdlBqL xC0GRSratQdILpkPESkqDgSUuUW0I8+aQ0w5fIZQGPB4TZIkZ6OkZtLqitYf937qq0aT2SOtZrhe 7nBkEB9fLq63jONLgm+DTVuKLaVXxUPSq/I7qned8r26Al3vmLG6YTEPxz4ctzj2YOxFx8WEqw7d Ie0rcTTB6DQmGpOM0j/CV0HGBVYhVYevBh1JGqNKko44HWan06FyOlDmVA6noE8yVtLNe7qZiKmS 2Pbqk8wiJFXSV4MGQnWaEuvHOB6+nuRVOgfcYCTNgzrT3jw6iE6gsymjB2gKuMiK3ZEFRem8HuBC qhiEvMs1hRdMsXzueFkY0zAQgwIb0VeoW+XmUEgKJ/niPf5myJGmTbOzcHkVRca1R5WWZPzH5BvN qNX3/LorW9Y+8ugzZH/c7x9+fL3jS29sGpC0c2fr3KFVsw5fHDH28WfK4o59/v3OftsObl40uAly sm/4G2ZBTgbg42A9UW/R5+sX6Fm+6X7T1AShp2WccYx5mGWK/iHzAn2ZeXHCC3qN6Bb4F5e0/JdV mUy8eh3hDAriw14l/LW0nmRX6HTxzHaAbgY7HRVMiU9yiiypvj62ZJB7gpu6S+USv6IDfgJ+o5/6 VzawVZLm5faPyQH+V6BRcrR/KUN6JVm1u04frkU14lphRClqkH85jdDMcUZG+IhShJxDQSLFcc0s XCMUlsnNbibruMfZJ/MreJP9fStcT46dvWvTzMzO5lhtSeWCMaOXmis83788/cjYEcMeXVl76eTr YTLXtnZh6NEZG83P0ekzhz46b5577zsjy4cNeqZh0mvLq2p//QZH7EDpNIoHUHv0cC3YNLafbpRu nW6r7l2d2FnorH+CCbEoW6CTBFnUaAUZdDq9/ojAzILABD1QnZ7Jwqv0VVCh89kQ1PA/tqjTwREN q6QjXhFFTTDRlaWpJM2CejmY7M2SSz3Z8koD5Rqn15uzgBqpmwp0b0wlWapw7sdC5F4gcA1F7xuj om/olq7nmnI4x3JyFjYMMBQ8g8GAvFO+UqJH+xubo68MfxLUZuYIyQ1yBJaYmMu/6lGAnMU6QbMu qM3RlXbP0QX9ObpkJ9IGOcqXQQrQuWWTTFNmvNckmAhdXTOPPvv4229X1GaTQS8I+27c90LtRlSN J2vGotBwO+wRX0Rd/j6Y2MnxUGJZ4uq4l+Le1J3UnU5QqeNsMWkOQd1YbKw9gOoqoOgZ4zTxsXFx R2IM5pg4c4xBj/IXjIvRJMUHYzbE0JgYQzCexMc7Y1FNXzEw8jGXTVTeoJclOfWmQcYJxtnGFUZm RDm0KXJoI2Az2qhtpTv2IMkGA3kSpbh5eczev5NH1+3y+JdEcqeLcph3GSWy0IRA93thoaphQETm gqLVikKT4sJbBROlMc4T7xFQIiHeLKMl9/d5LX7tuEcrdi69f2m9rcvp5zWvdJv3WBVRTV527Z81 pNRYtuTwpnXl3fIs9F87aqcOqL3+4TuPlZ/nXq0LcjMe9TkR0qA8mDrWTtrJwfh29nbu/rG93WOF YfIw1ZjYYe7JqinO+aoFzpOqTywmGRW6ItXtdXu4ZpvqJQX13fUURSmBfDyI8w6VWC0mJYjJSWY9 +tvmwXjY6ysxKrwzEjAajdS4Ml3DmZVEcoKaPOsg6wTrbCuzVtKUPYGoN7tcx6mo6ioq26jwch1b uMrK/lTFe0ky19BYbt+8yWAyNuP6Ssy3cE2o3mNLv3ds39Z9htDWB0dW1Ew7Pu/L2gvPLr6084ua Zt2Wd520edMjD29jvWLGNO7SuNVPZ4YW1f72UdnlWaQTmUG2vr7ljRtfFG4rqHxuza5dnHOTyEbW gklKlNchmCpKhMlq8AnEJ1DZx5jka0zJenqMUnpIBIea2FX391d8WSQgKiy+zOOtXIw/0JXl4D+c lyeb64KHtbjRXPgnhzBwS83T/M/QhWswqixAKyFDDEkKDm1kbGwcqRqlLjIuElYa3xXflqqMV41a lVhA+tLuxlHakPEX3S/6X2LUTMf0LEbQatQiY2iEVZIs6zCtknQyhlFuWWfGAioIbqYzYw11kiiq kiRBqqQTg2pQ6b4L8l9UPEC0QIg2GKtzw3BZ6NmdHWPnmLCSEVZJSFDbXVcln9MJK3VEx/NGg3xM prPlUpnKjxtOfqoEm8V2BP6z4aQdduPly2DLy3VczruQy4PQyzwEQ9Ff2NAWiPo1zpaFxsOHYw4f XihGKHKpU0jbq1MoqUf/fhXMIKjkA+iTIfw7F4oCMqm40IvBm1fwCHEewZ8qyQLN/JD2+2J7zdMb Pyf/Wts+2ZkpHvizPTlY2472J6v3T1u2BFdxNVrh75C/JkUX5gS7M9be29c7wluinqeWRjumiBPV Jdq54lytlGpRC7bUtCRLolodF5uUlla/PjgTk5BLrqQkE6hsfqm3z69zpCcmuZUYpzDQcoAi0kqE f73L5TqHjkCRzkUbm9PIlMM9UsQhoWhnmjy3eJwY6iWejIg793tRSDKataKR9Grq3/JeyYiR81fc 
X/r60trHyT1zmt/Xqf2jz9WeJuMH+tv2b9H7yaW1O8UDBfuHD3wxM/Vg6cjdRU2EnibLiC73Tqhf vUHWNR/bvudDTXikNCL8rTgVo+pEqAwWDaVjEqkbMvRDYSJMTiyFeYkrYZ24XXhBv1+o0L+jPw4X En9JNMXEJpoSE4U0qZ4pzel2ddD3Nd8f39c+Shyb+Ejskth1wtqYdc4tZDPdYjoREwdmcBjNRgfj 4Xp5vRzC/VBqvRyjAQhLiEvSCQlJTG30G+4Dv5sQ4nBZ/W4VUdmThg7g+nOtsMtlZGJhlzrDYFJY FggU8nCXTCJWiXmTU5A7sSmZGcwq+7n603hzLDcGrOKNe2rfvHi59tOnd5G2b5wh6S0PZb7x+Nav B4z/ZsHzX1Ha5Er16+TBjy6SPrvPv9dgw6pNtVcee7X2u7KDqPPPoQ72RxkxIH/mBf1uF2mriiy8 yZhkABUOVE3UDleiMbruSX+tOzdoNxe9SeO2DwWbCgmySlKJKqZikt3msFFJq9Fp9BpBireYLXEW QUoQrB4SG4MXm8rpIRaNyQOBAM41DX/mEEVIrBYrhvRmiiLi82REQz60ip7nyB/b+88qmFzS9eHH js6v3U1yHnuhSX6Xp8Z13Vn7vnggPrHzkNpjh1+qrd06OGNn0yb53734zW9pSVwK0OCw+ThPNXQK pklikkq1QiayDALjcwWV/AxGZVpKHVqmjs5U0zK6ODxAV5bnAoYNEfku5AKOuzcTui0FW4Qvblyk oZru4oGdtS121ozAJ4xHyduPkueDz4L5CeaEeFqUSgaq4kiskJICnlgr9QH2TiRrUozgSZLUhPhT fSlutFvUnVqE8cuk0lSSmuh3a4jG7h/6QJ2sdDEW4gJ0wSFw96FIDNc5JRuJAXO4q8UFace8CU6H 0+4UJJ3f6Iv3u/wqH/N7fTZ9ogcshjgPVjbHuWXMJYs+D3FqcWXMJrwkqT0eSBHwonxDFlcI95a5 N7/vytcKZTPbZ7pNNi1WuSFF4cSYHMWToXg2Mwmd6fgVtcc3fFa7vmIP6X56PSGr/Ls8Q/ZNmP/G NE/zhYQ+NutqK5q3g9Scn1Synwz87CQpqRhZ+UTjiaVdeszrtmj94drfSwc3Iya+ks+gxLqUlfx+ d6yWa1p2XHyWilt7WYV2X0VlQVCpGaVqWcUEtySJhW4tcWu7a4u0E7WlWlGrwiVWNm06bBld64JI OMPFGg16rhIjomDH8gARLTZrGFg4k1tq/oXjClWwfQ7uAar2tc9RBTMiyYwcOdmubNj32TGZEUny Um9kG6/15sgxZkQcz1/bF4fJxEgyEZPxPPn77vicKINJ9HvJSjyJoSQxZZq8xPTMOwI98M6NWvFA 9Rw2+8/2rLS6FGcwAKOeH1DWGkNt8JmhwlBWIkxmzJeaLeQ42wr3yp0T813tUtqn9hIK5AGJ99db HBdTT+9PoSlCqq+pIcvbzpffqL+7r7ePb5x2jH5szAjzcNtD2of1DxtmGqeklPgWCGXaxfoywzLj /JS5vlX61YbV8Um+lBi9VvSgvUhQyRITqER8KclYhgqW0GCFgzguW6CBkbhJd1JEJpKVRMLYKRT0 NUhKsghiUgN1gt9xn9oP9Ul9R4bHH0v8GKLxo5YmN4X9AjrW22wj3+ggrvGNDtpJ7lIjLgYKi0kh bniSKEpdxGKkpPr92VmRnU7UasabrRZmVdwQxlUp/gGv6Af9c+aEbb26D2hZO67H6JGzfn7i+T8W iAcMO7eGNuY0J5/3K314QfWz79T+spZ8anxw2f1tStrlj/RaBweaPT98wuvDRr8/J2bJ8jkPdMvM HFuv5d6pU46VTP4O59AYLc4BJbrpFtSLNAnZA8ovpasrackedyTIeEVyE9pIIAKm9xLF7vC7qn1r I0aWWx9jzYXCb4zKiVZe3fFZNrc5NK42kZXVJoj6nTv//IXrxka0OMnYpxmKgxq/oR/rp3pXxSxc 1C0o6lmspao9u0811fCieMkg64Ca+D7eKanNflrothC3pbuFFlkmWkotgkWv2B3eVo1tNYXxXEdw TQKF3AAVFqPT5wuSqxwvoSnINJlpxBSgpEbiVBMremNYbfUnH9T+OfGNDjtnntwnHrix+4vaG88v J/rvhG43yg/tHfKGcp4V/rJ2NE7mB4xbHEEdyeOnbmBnbVvfEmA2aSzgxF1sa+3oRx/lEet94UvM yVpBPWhGEoPL1Xp1ml3vSKuvT0vL0TeNb5bQIu3etEJ9YdoY/ei0osZl+gX111medmzVx79o31Zv n/3Veoftx+p9FP9FPVU7C3FZXbZAelpWDstJv5d1TO+rKgiMUI0OTNUtxJ3rH/o/AqZmWTGEGRul ZFkzPGbboPoT6tP6zkYxeTErYtbHhGPE9TG7Yq7ECDExTgED/21Bi+1Js9MpQ36qJsMpaOsPNg4G nyelkj4QNKYG+dbf7W/s3+UX/U1yOKddSbifyKnKoRtySI7VZ0tulHJIOiZRl5QnUalJc7594Pst 3ESg872WW3PxIl+DC3XHAHi3OOIE6k4C+CFAIRT7uLQrutBM+WRnpUY2Xq2oohyW+Hizxer1C5Ic g4EFXzysJOQO2z9m18EOJR2zx54aSTLzF81+KDFke/D44kXbuhvV1uSDTuuQwxMGZIwfPWqTP3Fu n/bb53ed09Uco3ek+DQPNrinoNhWvKRTcPB9DadfrZ5/T3PyRT2nsV6XRh2LHuh2zzRcwQW4gtya 8zPIk8EdRNQZUsRsMV8U81whF3W5MLR1tnFOdK10SS3ici25js6Wzo5CVaG+n6HQMtAxRjVOP8rw oOVBR5Xrc90p6yn7V3E/Wn+0f5143hV22d1iI0Mjc2MxzxAUOxu6iyPEU4m/sj+NOmN8DJMoJDgl mWjinTFaW8pxLTFqg+gkSrVMOxnNLWQKPkqrCNqtDSRErhLmInmkGxGIPalDs4h9Kp7E/fA1bpGK FXXAf0rYq3Aeb0Oxx4v6gCYoicYbwZucKqAFurmHIw1eqpi0e8iu4mDtz68dHEuz+jw2dccLU6bu EA/U/Lqi24ojJbVXak8+S1Yf6rPk6HvH3z6KWt49fEm4jFLvgKPBDmodcTnbxrW19orrZS2KK7I+ TZ8W1uk3Gzc7dCq9XTOGjhbGiFN0E/Wl+hd1e9X7NHt1Ootuge5rKsQkDzJMMMw2CAbChfXexri9 7w5FGBqvhA1wHq6ikzUYtGjIYp1a2eZkWqeBGFJikhNwFCnagIsQ3ECRe53xKcdk4pLzcGfUJCHr sGK7ivled1L01933o4JXNS+4POna5Ul18Yopp5ERTXnhhTrTjWGustXNUg51b9prziwhd3filZdP 1f426bvFO8+4dtln91+0bfO8McvJfOsrx0gi0ewgdM6ujQljx7358ck3HkXJao9cOhfd/ZwMbtdQ pvfps/Tt9GK2Odt5P+2t6Wnu5RxJh4nD1UPNRc4q1yfiibgv7BfjLpqvWH+wX1QkyOJyBRxc7Do5 
uAxicJOib2hpQbP1nWi+vr35Xuf9mr76kfqL0reWP8m1GCOJF2K0GPgnIMdMgKIlaG2ZBHwmg89o PG4iRlPQVGQqNTHT5NiUQ7ijPCeHZcZ5100WZHtSVveoYHW5jCKlvL/IvaDYWY6/RIsrtSebKzVq dYRhKGbEfMvxQPPhh2efmDLmk7lFqxvtqXHvmDL1hS2PTN+44Lml1c+vJ0JZj9Y05s/2NPb9I6+/ fer9w8izTqiNSShZ8cizs8FhLnDG0z5CoVio7qMdLowVJ6iHa1VGMBIjTY39XPzTfN0hN4ltYW/i bB3bxdHa2SN2gL2nc3DseMdg53Rpevx1et1mBAsx6K3W7hbuWASL07DSuMFIjUaW4NTIwAVPTZ6M Q+GyBvWKt0lNywrpid7hwtwenz+L02Ait4wu4rJkGlPkYEpa1i0si+pioEvNha7G4kDgenFA8U01 0ZOo3JriXGXvHRvdiZLiSXXCZoTMDDCZZY/isIhHOX2RhIEH0n/a/13tFWI+c4LEkBuXNOXzhy6t OUV76Jr3XTxjK+lrfb6CuNAW6Ei92rO1fxjduw6MIk8uaDvqRe6J49A9lWI8ZoU9wSSzmhjsjeyN 7UH7RPvTumf0W/Uqh76ePmSvsjM7n109hysrUaUXdAanhsTTgDmOCRJo1puJORwXZFYfA4GuIkrU uqdJ8ywletU4XVkrsa/nbfaD5AB44DrRgA2nXxjgJ+y5yknM5UIeNuQqZ+05psiuwGw0SWpZUqFL MapjE8AkGRIIBpxpc+aQAArWJIw0szOzs5rxyAn1kKthPD/LLF+/Ps4xd2rnAQnNM3q2O3ZMWLe0 eGxW+/tjn9W0Lxqy9MYIlKE2tT2E71GGkiANrgaLtFrRnK71mTtr882SOtGemK71m9O9Odqm5vu0 7c195X7aUdo/Nb/GxzT0pqe28rZK7Zy6Mn1DutzU07R+Xnp7bXtPfv3ent71R8tDPUPrF6WXpp9K veT5yXsl1WS1SPGVdHdFPWecrFgwoxvDLm6/SqEKjgOXrpnB1qLTadDkJzt1Gkt8pi9T47PZjluJ 0Rq0FllLrcw62UB8kOxKOWQ4ZjhnCBuYy5Bn6IZW0R5In+zhChnoqijkNR58FvOA7Do/6bwQPey8 EImBcKtebOVbV8V3pqJ00YhmWrMxHlL8b9wt6jlilzaj7eSZi2wxZGro9NUHP1x28OEXh5/e8I/v 1744c8aWnQ9P39LP0cOXMax/s9ASkvvFGkKWrim9Meb3Y9O3C2kfVh16/82338TVXwggXFJivt37 wcIPwOOtWT6WLeQLB/RMeVORYrVnWVUmncksiAQMTlE247bcpw5mNs0Kq0kV7vC7KkGiNatpVshy 1UInWjZYQpawhVmo2RfZK8Vj5av8LaUbOXseGHSN79DdFn2FpeyaAtcib+pyIz4Qw/KIuMVIMbIv RtIlEL0KBQ341mYOBApJIDPiGTEaN3lNClekeNPCillVU1/uVDFlbPdluegGf15VuPmZmkF048JH ei2fWfMqytgiVDG8pZxUzgwWdlOvVG9Qh9RV6nPqq2oZ1C71RHWpen206Lw6rNa41OirZEYFtSTM IiCJEtNIsk8E5c9OhVgVO8+kKnaVUWBudhxzjHVV1c1wUq7yzgBnFjnDi1XONgsnFcdlZ8YLOItF FRUV7Idjx6rjmb/6FD/d3FTbg7RQxhgLa4NdmOgTW7JMcYEoWlWiKDNGmRgHRK+lglnHTKJW5uPS SrLTZFiJem+1OnQ6vU+jWaklLm2etptW0NrjzDs9HeoEUjkB6GrMH97um2LI68JjD2Xnf3OIpszM hUZV5GgmRmU0+FVGTQJRx8gJEFkE/vo6M55E3gvxqJ2fvS+oqB2V3NTVrGlFZuun7mXfffjhH4+s jbl3FRtQveFwl2HcuiH/hd9xblryftAhS32l/mrBoP9FvC4JfYRpGhorueM8WarK8NU9salZaqQV SGNFpcCjFATnYYnEmMikZuoOyB2pgaafZpowRXNK+FqSX5SIV/LLPlWO1Fydp++mL2AFUj+5QD2T PSSuVb8tfcROShek7+TfpD9U8bEajSgIjEqSrFarMKNWqXyyZJZlScBdsKgxi6JGgyvPVATXl/+t WZVWCxrG/9yCmKxCEvS6lfjFsRJdj9YH1IdxH+A+pBvKm12n/9LTYcRffFc2pcV1u1L+PYNcflqA +1F+4stfJSG1KS8zZVwBVa6gXCMvl4IadXpijlqVmJgr8QPDxBwkn5S7FbLbE32FhP4d/RREf8NZ CleVe5SDhnILJ2fLjTlShCg5nUJ2a6OnBwUk8qvRwdgvGFGZLdib2ZyrXLDV9XIbb/zj7oRIdVJY oASrXB35twS8REaBJtu+qx1DDp2t3Tgbd20HSah2as0w6nq4ln9bYi6KQTNFupfuBxGdUrPmygvv PVnZEdq4SYQm+yIvwn1olQyiS1wvnhNZN7xcFQWXOFEsFcMi4/+/AhUihoY/STE4DvRA64FUYRhK b7E67KZOBgIRrVSM7yRlJnwGcyv4WXjEMkp+9EReeHs/qMOfBVtr9WgZL7AL6i+tF93iCfG6m1pV bq/aluBWC4I3ySnFO7WogkTyOuxGzXEf4X+LlfpQF2N8K5UX4IV7bb6VCSQBU0E70EyvjxwHwuNl 6gIuLQLYU3yVZPqevxQV9wg1FzBAuXytsKaroq4YBKNPRoOiiJLJeuuZXYzOHOc360wJJFYfX2cu la8F8BeKymGGVXnzrdhMxTnfaj03Zrw4ZupTrllHntu2xzug1cQnKvoN6zynBfM/2XXQkH4Hdu2r SaXPjhvU4snNNU/R8unTu697rObzqB/5BrllgfeDcaIgxdEtxkrj18K3cVeF63ES4zrbBBn4kJGs MR63nbeFbcytMseYLbHoUIhk0Wv0MbqYFK3iVbQE/2m72pSF5F7FdtVGJ9o22EK2KhuzCTQz3hJ1 LLH/5lisdU7lWm5kp4tuRTn/yOUm7qZfsUgmtUalkTWCZPSbpJgEYtDERhnGjydReRSZjm8a3eLe wrCFm6Z8UbSxu1FTkTa2Y8lLzP/UrvyJXTJm1pTQBQ+Ob73q/Rp+Ot4O4+FU5Ike7PB6sDBW1th1 HaSOqr5SgWqkNFqlyjK2iG1hybblGzvFdrLk2waIA9Q9jYWxhZaetvHiePUw4/jY8ZZhtmkkXi2J +geE3mJvzQO6ccJwcbhmnE5jdTLZhCJnTpE5K+JSfFmNZQKyUXZjaNvkHBc0LLfz4BfTMSkQxCpc 0Cg0cfDAN/IdmeJA4fXCwr++JsN3B1z91b3EXuoh4hA1Qx2PU94hQvSN4q2xSLvNi986TSyP/LDk XO3l/eULF5Tvmb+wnMaR1OVTa7+sOfrDoySJ6N9/7/0P33rvCHa9sHY08yBfYjHKOxZ8QWdsYLzH 2MnI8twhN3W56+u8iRnxGYltEie6V7pVLawtEu6z3pdQoHpAN8A6IGGMaqxutHG8dWxClftj8xe2 
LxwfJ10wX0g67w67LV4WMAbis1kLY3t2n7G/8aL2h8Rao9YUgzsHvlmXLLhZhxh7ynENMWqCmiJN qYZpJpO4TJoZ6wP42+26C7fr5O/268qG3ZRz63Y9rk7JLPFm/qUif6pJuIVVCze3WDVq0fExU849 0n9FQ9OLU6dvf2lyye7a0eJrZT16LA2veb62eknnFjXVwuajh9878d6RT1G052NA8DbyywRzgy0b xREjI16WxdqyXmwEm8wktUmlVqn1cSa1HgQV0SoTBY263koVUSW740gcTTb9xzgstsPhm3EY7qev TeLvKfi0cuq+3gDGdxfG8ONrKJzEX7FEZhiJ7GXUhvmbWo3Oe2BgqzZtWg40JzH/xuKOLV5K7ZBX NKnmE+7v83AvvRvH35h8HnyEJZuTW6jvU7dL6Zs8PHmGerl6XsqLcdvT3xD0aqvDZm3cKf2kVUyg fSg1ZhCNbYBqgHqAZoB2gG6AfoxqjHqMZox2jG6MvsJfkWrgZ7Qp9Zum9NcUaIf5h9Wb7J2cUpry uOYZ3ap6T6U/2XizZqvu+dTN9fb43/JbEvkxemxSTn9Vqk+nYQ63P55pGyY6eOjvdNnz7N3sg+y7 7MfsksHusk+wn7Mzl32FndpfpX1wTwt8h2AkQUKN5DjGAcRIKH9Vt8dsyVJe2SXFmLIIaTggcVwi TXTGy8zZUOtyEEeKPRhny7JX0gfK5ZQ0rPmKM+d4GklzZPBWftyvFmVUZdC8jNIMmmEkhKSAO8WQ fO5m+NCkbota3IV/O29SV8Ws8V3qtUD0QKQYN6oBtFeTFNGcdOHmqxxrxNgFUxskeXEr5TcZY41x RkFK1rsTQF1PTiBiA7wkmTHrifEmQLJXr1PVx0CvXqpaIwVYAriMidwsRl7gKBfl5UJaYM4cHocX 80D2ry9kpPpTG+LOpWmzfzs3x08SjZgPf165YfEjM6Zn+x5/e2231s3THus187X+ppCuZPSMMRZL o4R5h57qO/rtmcc+J/c4x04a3u4er82Xce+crh0equcKdHxkpK3ngJ7NvM7EOE1KZusZA/qvv38H l7SU8M80TVyL++bS/aDBtfH6ebhYFWyNiVI7xvA6vYYIYDGqAwYNGgNBazAmQzLRx/p0JCyr8tX5 RfJEuVReKTNAK7pBDslV8nFZkg/QMWAjTXePiCiL8jVR3LdcuJarHK/U5HI7gCGz8d3IFyZ91sjp Ct8Lm5op3+VRTrWp0dE5d8i49Hnz9uzdGxeol7RxvbHV8E106FIij6tdtrTm8S7pDj6Xuag155W/ AvPafnDwkw2Mgag7zpJl4M60fqw5KxBHUlRxFh2Js2hR4U04Hci0+GxWxYlaSZWVWLs6FLXnTtRx 1UEnOjY4Qo6wgzlwB3fTIPDvUrrVx3Gvw9Rd7Tc3Zpfr/CdaBuV0MjdiERSRcjBjjN6gp1LknS16 UaZLAL3KFNkepKXNQZuIchI9Zkr1K1sEqyImynZByJtxYuDz3YzaCq3pwR49lreseKai4/hu2SV0 Vc2eZU069Oi1YhHNwe0Q4d8IEy4hLzRk4CvZuAlNNuVouDbrTTlqDCCyVPxCK8Pf70FKohRrfBZU J3myoB5eMHcpqMZ4Eix4wdyp4N56DbPAjReDrj7UU/s1OZCt6QgdNH1JX1qg6qceQUbQ0arR6ukw jUyjD6mmq6dpFpKFdIGwWF6kKlM/C2vUj2l2wCbNa/CKvFvzLrylOQUnND/C15pquKZJ14CosYFF Uw/8mmaaboCxuxiMtWSJQQyFNLiN8Kk1ZrVaAwLFHQOYCcEGGjTdKhWlRJI1agGI2EhHdMmqYDCI u1KqriQJe4MY+FIRU0G1mwZJsvb7j/iSXXbYawprCh22yxcKIwdWOTe3FyZlb7FwpvJtEuVlJH9x Xlx4618hgkIPyYz7v1q7+uA4yvO+u3e629v72LvVWfKHjF8LbMlYlmRbRj4nhDPGdoQji1oyQSQO rO727ja+uz1293QIPJDMJHW+JtNmpgmQP9pOaMEzbcD4E5MWShmGkjYlM+kwpZMMmUJC2pJJZghT Si31eZ73vQ9ZfDRt5rR7z777vM/n733ed/f2dLA+HO+Gdfx3F8t//dqmDau3/seTi9Xg5stfKDoz 88qX8PoUf/JP6roAGTGU09mknpavDW7RlJtTn0p9PRVIIT4jGzaOJfvW8+u37F9uuGYsGIpFukPr ImuMrqAUDEUj0YRqJKXuQDrcp66LroflyabwterWxJi0K7xH/UjipsDBUDY8qR6K7tMPpm42PqUf MY6H82rRWAjdE/bVJ0OX9PPGb0LvRgajqUFpMD6QGNQHjJH0bmncaKi/rz4Q+FbsEflR5dHon8fO SedDlxIvwHXfP0feCL6h/9x4K/RfkT4j0NUFEA53RTRNjcZiWjKVgvF16EyXZLCLSxPZgqYn2N+m wioLpwxja1cYLgbDCS0W2xRPpOPxhJrS9a2amobuUlcri5Iih42gqqdiibiW0oIBIx6LqWo4jGk1 dD2RkLT028m4fGccb60H4hflR7Iam9JkR7tfU7SLytFsZColO6n7U/gx29FsNNkl30lXPAFI/CPn 5Le73y7QtLBm8q1jx1ZD2Yc/BMCx1T9rZT0pXvwZWUJEivYnJzvBsPwNkHAyAZeeieRHcUMat0OP b5i+7WycxZjyvaVXJRm2xNJLZ6VRnRkXl16lZ9DokaNDj49Nw0WnuvTS6TA+mgYNG6cPPb6T7uSr S6+eDjPeaoinlp5EQed1hrLhsv+lJ8KjKPEJabdyiWtqCW/166V+qaVXz2gsyPBp39nmw5eJpR+d NzLSEGx4SdydoQtiusbDq2ICOWG8uxeBfnVgICAfWnzq0qkbgjtPPfnHu64//9ji2adObXkZQP/t 11IvKtXLD3z/H5TCu68oJ8799z/i/4qEevRrQH9SblzQDVnvX0OX09nzazK3698MflN9MPGQ/kzX M6Fnwt/XI3q2J7M20B1ZFV+b3CXviX5e/npUHTE+GZwNz0ZvS3xLfkB7IHpBuRh7Ifpi4u+TrwT+ KfLD+L8kX9cMIxQKhNVIRA6FIl3BAMxWOhTduKzr8WQUarYSjwZiSS2kK7qWfF56PqIkN0mRtCRF Akr8+bgc3xQLpGOxgBaBC1QllIwDCiVtypCNifh9sX5NN0OR+7IaFJIL2dAtoc/RQ277sgkWuE/p nwJHJ1InnhMPXlNtgdKSfD351ps/O7YMYvRdAQGgY+JhzYyun1QJOHwPb4im1n2Ms4nV6zNRerRh fSbW35sJwIbHT2zMJOkG9aqM3L8xE8n2NR9qwCzCigNvOUB92tmLlWoc7zgEBmRd/sLigz/9znDf 0KYzLy/+ofzVH7+yZ/EXyqC8+M7B0Rt3vrsYu/wD+ebZxWNYvTYu/l7gl5C/tfLJM3qfrKMVD/dl BtO36o9pgWw8CwFlg6NjSdyFYxGjJ77aGIgOxAbi18Wui+9KPJiKDhqD3R/vmTVmu2dX2Ybdba9a 
CM3HF1L3pO9Z9cX4V1JfM77W/eX0A9qj0e8ln0pdSv+b9vP0b+KXk++kl/qughIQS0I9gcq/Jt3d vcnQ0nCgx6BgbIpq6WhU6zaMWCwaCvSt0aW+ZJ8y0vd0n9J3UbnhnN6dNbLpi8pMNnqDkTWUO4yn DcW4KN94Xpf7pf3rNDxl6CyazbLYaGwqFrglthRTYsBxZkQHZ5Ubzq5jJ6B4rF2TvIyPH0JW8enD 1cm3XluDXy14c+3q5JtESatxcdNMsdp5ZwpzfJISCpUhASNyNYzIp6TY0htSdOkNuWM8ppd+cn48 o/WPZxIwCZ9blUn18+dUZvFbDDAkIZ/dA/zGN7zaUxA+mn91//3pjwx99OO9qc1d0cXKsz/e2r9h 67+eXSzvvWb0xK1ji8VTycFr1h3X1wcHLz9Y//yJeeX4uy88duPsNOZ5EMbpjyDPCflL2bhxUfk7 VTHkHUYv3lz8QTYChPyxq+hW47PZm4HYogxGRpIZOaNNyAeUA+pEZCr5aXlGmVFvj9ySLMs5JQeX IPfKvnpv5KvyF9UvR96R31LWrVE3y1vUrZGM+mfqy3IY0XshuWpMgQoUwUfAB2ApruyJaIqqaZtk BSYIRcbnThWzayu4qJlxKb41oSkXZf0sTBJdIfyEf0gK98f/JCFLiWzizsTnEr9KdCV8SbtPlh+T 5CnJkZbwZpGe9DfiEG3fWsTPGF6jz2+Sl+lbTK/D6vR1+uhPrACSiee20gOC+GmN+C+I57bIm1W8 nuFhUTFIcPTsBQwPxog/xnXXrHyMcqrCONXRO/H2xoV1mYjas+56nO6f6MWm/8xqPRklDdvanvYI 3rlLDl2NT6DI4et2blw1qDzs3bY4Fchf/htn4bPyv38joIa+0bj8mXsj36b/bXUgcFjiv6gqSYvi t2Ews5r8MUErUqLrJ1Lzl1c/0/WMoIMdPF3S6q5fCjokJUJXCTosPRcaErQqbQ6fEHRE+kr8YUFr wWdJM9JRaS4xLOiYVEj8gaDjobOhXwk6IX068XbrJ8vu148IGpZ2+q8FrUhhY6+gA9KIsUPQwQ6e LilmTAg6BPymoMPSnFEStCp1dycFHZH291wjaE0x9R8KOipt77EFHYPLg4cEHQ/cbrwo6IQ03IPf OpODAbAt1vMu0V1AJ3ujRIewvXcd0WFqHyBaJXqc6IjIEad5jjjNc8RpniNOBzt4eI44zXPEaZ4j TvMccZrniNM8R5zmOeI0zxGneY44zXOEtNbhb5R8OUh0rKM9Qb5/kugk+tJbJLobaKO3TnS6g38V yhF0T0f7Gup7kuh1pIvLXN/Bs6GDvob4/4joa4n+DtHbiD6NtNphv9qhK9bRHmv6MiMtSDXJkgqS KeXgnUmnYJuRSkRPQkmpwuYLLibtgyMXaNyb0G4TB4OWMvQfBuomajf/n5JGWpYxaRrOlOl3ljiP B20T8M71bZcy8BqVtglqB7XuhR5leD8CfYpgg0+9joA8DzZXmod9nmyowjlLqrQscUEvAy5TaOL8 NkSIQQ/sjxKr0hBpwTMmacoJWSa08J4VkogelMD6Ckm04YxP3CXShVH3hQaPPMxRX5/OV0kKvqNN DtlgC19qJBstypFVHmnDM8ifp3duf520MdLQaZVN8n04X6XjBskuCe2W4HVIFtfdbC+TbF9EJAdH PDJX8vkg06Ko2PDOZedES50ijblqo8ShvLgU0TL1R0sRHRXRq6khR/3nhVZbeIrneDTbUSgAJ0rj re242iK6jvDEJv46HbWz6hFiy2Tde2OiOXK8li94rkLy2jJc0HNcWGuK+OcI00zgvhmzPOkuUivv 34Aztsgh8pQh9xwjDuyLcG5eRJtLaI9lk3LF0cEohjnhv01ZKxNPjcYZR2OVenJPOtFtt5DF4Pzd IjMVsgaxyfPmiZFcbtlRoaM2ev0r6o13hX85oWOOJNQp0vll2LSku6C9Gdk6/WfhpocFwjYjDNxN sfUIdz5lo9jKOtrOxzuOpaHWaPIEytr1iJ+tUEZM6R7qz61GuTk620Ya156naNVolCy0vGjqxv4N Om9SJFyhA8cQj6JP/ZsWN6XXCEMVqqFN24ZX1NU9y7KG9a5I+Mfs7pFuFfqatRZr5W7YM1gVT1IO XBoPfBxt6ZA1CbhuH32XcO6KcV8h6cdbOf6/1nyel6KohJaob+06xaUehfmASbdQfyZtJn2TsJ8C 3QVCbjNiiE2Pol0S0oalw8A3A7PHAdj2gUdIT0Er9j8A+09Q+35omYY9joGDEMX98Jqk1hkpTt9B 1chLW4zDK+fQZju3mGeuJnLbHgsr48PnPAdi4BI6SsTd9KdZ+Zt4mqOzC8Bfb+nMtWooj12d+rZr nyVGB1aodr3mdcIWtdkTtaNIUqxW7cXYzgptWEXmRc2ea816XKf/AZFpYqvRqoKWGNlWa+y4VKd8 UTcKAvfvFa/maMeIWR1S2tVipb68wBdieY4qMLd6TmSmKiS/V4YGyKvlkeKVfyUqVmpu1lCsliat aEzQWhbR9kStej/dw4T9akc9X1iRC0usZjpHDp8lTLKoRpHFecum8fbhOWcCi9WOGtrUi6M/T5G2 O2Yrt2PFNdTidjtw214jfHCk0LoKyW/iylkmr0H5P07Z7KwmzTrc5nSAl9eZOkUc5Zda/nC7OtFd EZWbx5+PqprAR7vCL8fQB3nUxscE+b4yc801Hs5tllgJcm/4ujJHWa1ekQP3ini3JaN/DlX+vKir 87QGa0idq7gPz35THh+TllhrLJ+Rm/JW5pFHq70yzpHMleO4mTHzilgXfitr21FeqWH5umK5RZZY LfswQzYl4CyzF1q3STg37pbGpHGYDxnst8PRNrjeGINtVMJrzqPSIcE5Sv/lfwxenB6XdsKGva6T dsG1CW4ovURrkhroG4FXg17DNLcvH/E5qnzvN08gdRONzkYLF3wWtEW1RZuOUIXmc+hhsc5yxAoe xyefSV06Y1MGpmHfnjcQVXhlheuE387uEeLHX8wdgb1PFQJzNUJzzx2EEr6eGG5x/m41NGgNwHmt 34mW5rmRK/DYkj2zULMKZs5ip9hMyWKTTtXxoYntc9ya45q+7VRZrZwbZjeZvvkhTCMojE075Tq2 eGyiCv22ZzKj22C3Y5jtLZfZEbtY8j12xPIsd97K73OqvlVBIe4C80zoBO12geUtzy5Wh9he1zbL LAdcpg0nK45rsVK9YlZtz2e5kumaOR86eL6d85hfMqsMzi0wp8Bs0FJzrbyVszzPcT1mVvPMBPn1 XInZQpRdZX69arGG7ZeguwWtTh57I102QQf0N8GYZpvfsKq+bQF3Doi6uzDMKCTOvOWa4J7vWqZf gVPYIVcHFz1U5jkFMJNMKNTLZSDJVlBfcUCJXc3XPZ9c9fyFstUZCUyOh1ost2JXicN1joNYE+zP 1UFRlSzL22bRwfONkg0elqxyDSLisKI9bxEDZdlkZQgHq1gQu6qdA3azVrMgjNWcBUp4uG0MFrPu 
BmcqVnmBgW8eJLmMMip2mcLrC9x4Ql8OesxZrO5ZeR5N6646GlvPYfxZwQGXQSI45ft2tYiuuxbk 3feGME0ehIxwBIcVs2jeY1dBtOXnhnjQoHve9mplcwFVYO+q1fBqZg1MA5Y8mOjbHgpG9prrVByS NtzE6h7u2hGrWC+b7p5boR+idsfw7h1scNLOuQ7maAtxTc7Q26NsxoXcV0z3OHr8QcgHX4oAQgvw RpgC1qPT7BbTZ5vZzCSbKhSGyTCr7FmNErANH56amTgwsW/vzMTUYTZ1gH1iYt/+w9P72d6DR/bv n9x/eCauxbWZEqSiGWlMCwoG58Brn7LQsgdGnlN0zVppgfQg+DFOcwtswaljzxwiFKyrV/OEPsAE AIpwDZiwAc3AbhZdy0L0DrNZ6FYyATrOHA496OkvMwaj1UAIWpBsC7PjWjkfsFGA2LftwrQ7RYtY CBatfpBOQPxc3QfRYKYDo7DDoQGvaRSAvxWKVmdEKJs3y3VzDlBpeoCqzt7D7GiVcL7Q9AJ8EsmB IWEyr2bl7IKdW+k5gyhWCaHY18znbcwxIMelwjWEzS7FlirCFUaV7YqNDoES4ms47nGPA5swTI1O AzBTnyvbXgn1gCwe7gqAG+yHVNUWGAe8iNByRRSPiULbOax4d9Utj9RArcxZblV44Aq7idkrOfVy HrA6b1sNXuJWuI98kEkLqka+XRZbPoJZVIxzfjvH6JgprC68t1gyudVB1AohCPSY/h5kODq9l21j g7vHxrew8e27t42OjY5GIkcPQePo9u1jY7Af3znOxq/bldmViWsl36/tGRlpNBrDlWbic06lc0xY 7CbXbGAsYAiCUSDpiDMHI/Qw1CwHCvwQDlLXztkmmzZpbHgwY+3e8T6yR0p+pTxS8atmxRqpeHeY WCeGsfF/2aFhlaHV+vAueDQi4kjcsBhy6DIYFyBVWujCJaAch8n8s3D8C1oKNM9P02IRl0S4aMkH HgqcDvxV4GnYngxcCvxFhyyTFgbN45+SbGuZLmuZNJIXvCq4PXgoeDB4PewzwG3SJWJeLEdK8uPy nwYkWuLhTRiXlmcoQ5L+B/H4gOoKZW5kc3RyZWFtCmVuZG9iago2OSAwIG9iago8PC9GaWx0ZXIg L0ZsYXRlRGVjb2RlCi9MZW5ndGggMzA3Cj4+IHN0cmVhbQp4nF2Sz26DMAzG73mKHLtDBQmUbhJC qtpV4rA/GusD0MR0kUaIQnrg7RdsxqRFCtEv9ufPckiO9am2JvDk3Q+qgcA7Y7WHcbh7BfwKN2OZ kFwbFRbCr+pbx5IobqYxQF/bbmBlyXnyEaNj8BPfHPRwhQeWvHkN3tgb31yOTeTm7tw39GADT1lV cQ1drPTSute2B56gbFvrGDdh2kbNX8bn5IBLZEHdqEHD6FoFvrU3YGUaV8XLc1wVA6v/xQtSXTv1 1XrMzmJ2mh7SCumMJI9IIqXYCUkSZUSZQMqfkXKJtCPPpfrqtbYmSCSeyIW0MifaU0FBl0vKgS4L 8iyog0cyy8g6/6X5KEi+29HlCY+9XNqiRuapzK+3jlzdvY/TxifGMc8DNhbWv8ANblbN+wfVkp7p CmVuZHN0cmVhbQplbmRvYmoKNzAgMCBvYmoKPDwvVHlwZSAvRm9udAovRm9udERlc2NyaXB0b3Ig NzYgMCBSCi9CYXNlRm9udCAvQXJpYWwtQm9sZE1UCi9TdWJ0eXBlIC9DSURGb250VHlwZTIKL0NJ RFRvR0lETWFwIC9JZGVudGl0eQovQ0lEU3lzdGVtSW5mbyA8PC9SZWdpc3RyeSAoQWRvYmUpCi9P cmRlcmluZyAoSWRlbnRpdHkpCi9TdXBwbGVtZW50IDAKPj4KL1cgWzAgWzc1MCAwIDAgMjc3Ljgz Ml0gMTYgWzMzMy4wMDc4IDI3Ny44MzIgMjc3LjgzMl0gMTkgMjggNTU2LjE1MjMgMjkgWzMzMy4w MDc4IDAgMCA1ODMuOTg0NF0gNDggWzgzMy4wMDc4XSA1NCBbNjY2Ljk5MjJdIDY4IFs1NTYuMTUy MyA2MTAuODM5OCA1NTYuMTUyMyA2MTAuODM5OCA1NTYuMTUyMyAzMzMuMDA3OCA2MTAuODM5OCA2 MTAuODM5OCAyNzcuODMyIDAgMCAwIDg4OS4xNjAyIDYxMC44Mzk4IDYxMC44Mzk4IDAgMCAzODku MTYwMiA1NTYuMTUyMyAzMzMuMDA3OCA2MTAuODM5OCA1NTYuMTUyMyAwIDAgNTU2LjE1MjMgNTAw XV0KPj4KZW5kb2JqCjc2IDAgb2JqCjw8L1R5cGUgL0ZvbnREZXNjcmlwdG9yCi9Gb250RmlsZTIg NzcgMCBSCi9Gb250TmFtZSAvQXJpYWwtQm9sZE1UCi9GbGFncyA2Ci9Bc2NlbnQgOTA1LjI3MzQK L0Rlc2NlbnQgLTIxMS45MTQxCi9TdGVtViAxMzcuMjA3Ci9DYXBIZWlnaHQgNzE1LjgyMDMKL0l0 YWxpY0FuZ2xlIDAKL0ZvbnRCQm94IFstNjI3LjkyOTcgLTM3Ni40NjQ4IDIwMzMuNjkxNCAxMDQ3 Ljg1MTZdCj4+CmVuZG9iago3NyAwIG9iago8PC9MZW5ndGgxIDQwMTQwCi9GaWx0ZXIgL0ZsYXRl RGVjb2RlCi9MZW5ndGggMjAwMTAKPj4gc3RyZWFtCnic7L15XJRV9wB+7n3urMwwwwCzscwMwwzL gCCgiJIMCpSRimtgkbigkhvuWm9qZblVaqZZWdKmZosjmOFWtGfLqy32WlpZWWll+ZZZKcz8zr3z gGjL+32/39/vn99H6Dzn3HvPucs555577jODAQGACFgIEmTXT6+tf9R8SyxAlwCAcdKUUZNr36oL HgFY8A+AhIGTR82t10VEnQYgVpRyTpo6ZhQ0xXQHKBsKEPPYhMkz505eePU/sb0HlhsmTKgdZXo5 6g3kRRlIxuLYN1dEID/sQeg+ftK8ca6417GvEb9h86Jx9eMnPyL9EAOQNgFAcfmY2TOdPxy6522A ongAVY8xk0fVj9/efwVA7CGA6LuAz52++/2dEeTOkYbCX9Q2NfCfR74sTOD42csa958929pmLFWP Rl4NAhEM+FT1Dg6AvkY4ezY4yFgq13f8KFJ5DT5/gkIYAwqgYIQsKAZQ9jdOQl1RpSxCC2TYAs3S W1DPZoAJoUyVAFWK12EE+QauxbaJCH2lBIhnT8Ew5J+F5RmIV9OCUBvyD0d4BCEXoT+CF+EahKtl GIJQjDL7ELZgHyN5PwJ/Cder3oHLcCxAWIswCuEexXBYg233KgtgNK/Hse7APtxI34f1Dyq3wCqk 12F7FecVmMsPhyuxPQPp1YrhoZDqTlBhHSDdhvVmHP9uPmfEXhx/BpsROol0OvbdD9sXIx6GeKg8 X6ugv+QyYq18jUs5jfqZj/WrEAYjLEe4BvXD5bNRzoHlO5GOwHlpEOsQIhlAEvIU0ssggDgTx+8r 
rxvEunEdHWvC+Ys5/TlwnRZ3BpwTX9cJhHcQDnSa28Vw5wUwA0qkXGE/vmY9Qi/6DvRBvQT5uhRf hX7lgJ55CNe1G0HBxkJXNYS24DyLFNthHZZzEAoFzADC1sNU6TTaYDvcoFwLD2M90K4IZ8BDvwe7 0gP5qL9K7P9qhFrs82XhD2P5HELfI3awr8COfdUgXI9j72vXE9cNlq9Au1YibyvfMajXRQh1qIN1 CNP5/HD8LK5ztPuvZHjwCeQ9iuOUc8AxHQJw7WG7wiyUn4Z9ETFO2A5hjIDt16NOn0F4AeFFPod2 EH4mg+hrC0h0S+hnxNEIdoR3EFZxf0OoQWjgPDi+Fvm1wl/RZ7hvcv/gvqF4XfjqED738BrEXlgu 75nJKH8Ngg0hVfkUXCtDKvJy/YzmPsv3S3vf3Le4X7dj4dMTud+Tb/k6uU91wvcoWmAQn4MYF32r HfN9h/3O41iKFXO6XzoIK7nPcn9rx1wv3Nf4fuR7QsYVndaaIe+RDJRPFL6OvtiO23XRgffD/djn cOUq9NPvYAD7GAZIb8MAxTzEd+P6dmIdrocdxBjmg4HqFkhDWw5E2fsuwus4qA6S63GsFexJ1MVB eFDo9SBNYgeJQvFk6IQCyD7Fk3S+oP+ALwbSEm7jmEPntv+2/n8D9EPFkzAO6W8VB0MhXM/dfE+o viPZCM52jPWNCAsR0tU+sk49kTSrhoFRiWcbwlTmh54KP+SzFihiseBHPXmwfpjychF3V2L/r5Pv 4E601+2qWHBLJzA24lj0QzwfEHj/iPt38qMLfO5iX2rH7f56MeY+w+MuYgViG+67XQi7ET6W4XOE L9Afp4j9i2cDj8/ifMAYjXBn2F9DJzv8cx+sR3xXu39e5KfpF/mn6mK/vBjzs4XHd3G24D7FedzZ vn4eH3mM4zGSxzl+9rXzX4w7ya/B2PEvEYffgRHyvk5DyEbIwj72yHFkt9QcOo179Ljy/dBuVVFo t/RmaLfyvtBG1cTQG8rtofW47rSOM7UlHMv4fmo/S7me+LnYfo4qvDBOjmf3C14cX5yjw0UcAOU8 3H/Xw2js921+rvJ9KK3HfYf6xP5uYZthEvsCVuLcDdLWcD0bAgN4TGSzkcZ6jOm8PUJaKdoHs59h NktDejPiByBKqYLZype4TOgdUfdluI3XKUbAveh3WWwpPKbYBpXcVnwdtFvoTW573PN29UJ4UAXo w1/A/ewsrrkF1/i6wA8If+KyTaGzfH2qXmBRSLg+zoPAZRQPglPWx1qhixahozXCh1EXvE/lByLf AMUh5N8AN6m1cL86BePTL2BXYSwRY22Dq9V+oXcmzut/4/74Dn1sGCxRxIR+F/7/VCgkncU99B3u Lw4E22LBpvgOHsC9tEToJ4yX8/0jfQex3EdwfUNFPvEd+vjjMF35JNyhbEG/O4hnwUG023e4lonQ A+lV7MnQOeQtxT6Aj431g0R+ws8pf+gA3y+qFrCq/Dg+8vA5iPwPx5W+wvmuhiUYS4rV38GjSifP awhB30tE6BoGUV6AMB/hjjCIOmMYExf2cZOor4U36BaJon/z9n3sCdx7D0CxtAm0bBzmD9/CLTQL FksD0O9O4pkhoRyWWQakSiehXPpNnD+LFVrIF3xmPMePQwWrQvkWGMsaYawUQtqKsAb9EeUUzTBC MQbzrOuwHxlod5TRQIVyOdJZoac4nxjjt5CZA5sHOUKuE4i5tgOf8yOd5rwGdXsz+gOfL9Kd58vn 2jFPeY5/Nj+xTt4vygmewzxTD+G9JeQJ4+Ageic8idBAP8Y8vAXmk7WYrKyHMvIVwnoZnoYrBN6G MAjK2HyyBKECgbH58BDiTMTfIhxEWI+wB+EH1g1uw75fRNzE7wUc6PMYuxBj++MIexE+bW/rDHys P6vvDOxruKCsyIEFHGgG5oQZ8Ef+hyCPzcU4nI36RJBmQwUHZSRMValhKv0C63lMuqiM95172VRI /E/z+U9A9kO20GEY/J3X2G4PxOb/ARzphJ0c4/7K5Ofz/3WO/y2gfRcgjBf6b4AuwoeOo/5VoCF7 4DpyFP1vPVzFQS7XCH0+hPtethPWLxH1F9kPfaW7NBj8F9cjfQuH9vLFdv1PZex3a2do94N2UOVg LoLAPkV+hIvLeB7czkHJfSxDlG/i0F7uGPevYCjkoZ7KEIPwsYvKSiPM4kDrsbwOuJ9P5tBRHop5 1dCwf3JA3dZxQB0CB6wbzwF1BxyQdxGHTnqt5HrFMbkstNun3c8vtg+fF3sF+Y5hzjwU7BfjDv+W 48UFPj8o7O8dZR5LvrqI5/yeOL83cK/8VZ//fwLcO28ivI7w2v/XY/Eow2OEkceJ9zHfCGCu+ije Md+COwHalgCcexGgdSTGIbxVtz6NdcOQ9iL+N4IV6+oQ42l0Dr2sFb0x+AHCOwgNLA7mynmlDcul Ydm2jXJ/nrA8lzuL2c657mH5c4sRHkD6nwjoZedeRnwP4l+QP4ByVYjnY90tiPOwXIFQhuV3sdwb gSLdE+EEAs6zFdOY1iyUfwhhNs9H/uQe+v8u/ov7x/8Uh98BQLXIOXG+F98h/se43Z7/AV9812i3 /3/C7XeJP2BZD5jzvcmh093nb+847Rjt+bsMpxF+ZEtDbZhTqkQejbmsyLl5/ihjkW8fFPkkkd8p CsxzZ56/8tyZ56+I1yO+TbEf5zMDruL3fD4vdH1JhnixIUAzGEtIaSuBabrzd7T8NSj0wHNsAVlB 7iYPkwA5QkK0ir5O99FPJCJJkkZyS/OlZdId0sPSP5mODWTXspFsNbuXPcgeZU1sN/uInVDsVLys +FZxWqlTxikdyp7KwcqJysnKacr5ytuV65SPKZ9QblW+rTyo/D3xtsTfnQZnrDPRmeT0Ors4s525 zp7OQmdvZ4lzqnOB8zHnJudTLoUr2mV2Jbm8ri6uoa7rXGtcm5NokjLJkGRKik2yJzmS0pJ8SVck jUqqdVO30e3ygId6dB6jJ8Zj9cR7kj0ZnjxPoWeSZ6FnkWeJ5w7Pas/Dnqc8jZ5dnj2eVzxvefZ7 PvJ87S30+r19vDXeMd5x3onHFcetx3ueoqe6nqPnnOe6nys81/tc8bmScwPPVZ276dzyc2vOhVpH txW1/RRsDbWGQvwNODQIzTWQreQdchY19xpq7pAEHZpbhJq7S3qUERbJBrHr2Eq2lt3PHmHPsGZ2 iB1XBBS7FQcUp2TNuZR+Zc2fau5U4sLEBqfOGe20OJ2ouXTUXI6zQNbc9ai5R1FzWy7Q3BDXNa6V HZqLQs3ZkhJlzdUkjRWac/6F5io6NLfS0+DZ0qG5N1Fzh1BzPTs0V+u9/jgRmiOn2DmCmks/1wM1 5z/X91zZueHnbji37Nxd51pbr2vrjZpbyDUX4vepNaEY+ibdK2WFjtC3cUcY0CPvJnPIRDK9tQHL ddxng75gejAtmIrkP+AGmA2TYAJcBb1bP2k90nqg9a3Wo63vte7nnK33ta5rfar1Yfxd3bqgdVHr 
La11rbkAX1YDfHEk/Fb/6G0Iaz6/5uiio79/vvnoHCw9h7ASYdnRmz6f9dn1n807uuvLjKN3fbb5 s7Wfrv30kU+XA3y6kct+Zvl02qcY4T/N/tT/ae6nyUfKjpQeKTxScKT7kdwj2UfSjiQdiTsSc4Qc /uHwd4ePH/7q8Bdc6vBrh184/PxhHOXwq4cfP7z1cOnhPoeLDycfTjrsOpxob7GftX9ufB4zvedV G1UPqtarHlDdr7pPtU61T/W06mHVBjy/Tih7K+5UgDSG713S/cLPKejXYbigfArvTPKPNBb+5kca IC38i5a7EPBsYQPYYFaDeHTnVrwHAt7fBPzVD6vgwAbLpQF/N4+LJL0stYNO/ltO7V+2XHVBUYJH YRHcJl0Ha+FruB3uguXwIDwBj2GKsAzVeiushlPwb4zS98ISeAmOwI/wEGyBn+EnOA2PwFPwBrwG T8NoGAMrYSy8CbXwOuyDf8Jb8Da8A9/AOHgX9sMBeAbGww+wCj6A9+B99NUT8B0sheuhDibCZPTe KdAAU2Ea1MN0mAGzYCb69Bw4DnPRu+fBjXAT+vlz8DAsgPmwEG6Gb+F72EnWknsJJRJhRAHnoJWs I/eR+8kD0AZBoiQqooYQWU8eJA+RDRiLHiYaoiURREceIY/CGfiVPEYeJxvJJrKZPEG2kCfJU+Rp 8gzGrADZRhpJE/wGB8kyspxsJ8+SHeQ50kz0JJLsJLuIgRhJFDHBUficRJMYspvsIbHETO4ge8nz 5AXSQl4kLxELscJWCBAbsZOXySskjsSTBJJIXiWvwe9wFr6AL4mDOImLJJHXyRtkH3mTvEXexpj5 T+ImycRDvGQ/OUDeJe+R98kHsIukkFSSRtLhGHxFDsKH8Bl8BB/DYfgU/gWfkB/JKfJvPKt+Ij+T 0+QM+ZX8Rn4nZ4mPnCOtpI0ESQaeY0AJpVSijCqokqqommqolmTSCKqjehpJDdRIo6iJRtMY0oXG UjPJItnUQq3URu00jsbTBJpIHdRJ76AumkS6khzqJrk0mXqol6bQVJpG06mPLqFLFUZFFP1Rulm6 VbpNWiwtle6UVkirpTXSfdKDeHI+Lj0hPSk9LW2VtknPSjulvdKL0qvSPukd3KvvSgelj6RPpM+l r6QT0knpR+nf9N/0J/ozPU1/oWfor/Q3+js9S8/RVkkrRUg6PF0ILuox9jjbyDaxzewJtoU9yZ5i T+OpspUF2DbWiCfzdvYs28Gew3NmJ9uF5/Qetpc9z15gLexF9hJ7mb3CXmWvsdfZG2wfe5O9xd5m 77B/sv3sAHuXvcfeZx+wg+xD9i88pT5iH7PD7Aj7hH3KPmNH2efsC/YlO8a+Yl+zb9hxdoJ9y75j 37OT7Af2IzvF/s1+Yj+z0+wX8iU5xs6wX9lv7Hd2lp2DbdBIl5E8eBZ2wMt4O2qC7fAK3AIv8vdW 0kBpsFQhDZKGScOlq6VKaYg0FH4h39AW/p4F7oOTuDMfh7tJEawgxWQ2WYXnxWoyB5rJP8hJ8gOb xqazm9kMqUoaIV0jXStVs0VsFpvDbmOz2e1sHlvMlrClbBlbzu5gc9k97E52F1uBJ/IqcSY/wNZj TvMQZjbr2H3sJraBNbCH8aR+VOomdZd+lvin3kqA9g+KCWbkQC8KO9goMYVSpdZoI3T6SIMxyhQd E2u2WG32uPiERIfTleRO9nhTUtPSfRmZXbKyu+bk5nXrnt+joGevwst6F/mL+/QtKS27/Ip+V5Zf 1X/AwIpBg4cMHTb86sqqEddcW33dyJpRMHrM2Npx4yfUXT9x0uQpU+unTZ8xc9bsOXPn3XDjP26a v2Dhzbfcuui22xcvWbps+R133rVi5aq7V9+zZu296+67/4H1Dz60oeHhRx597PGNmzY/seVJ6amn n9ka2NbYtP3ZHc8179y1e8/e519oefGll1959bXX39j35ltvv/PP/Qfg3ffe/+Dgh/869NHHh498 8ulnl3LHS7njpdzxUu54KXe8lDteyh0v5Y6Xcsf/We7o9/uLel9W2KtnQY/8bnm5OV2zs7pkZvjS 01JTvJ5kd5LL6UhMiI+z26wWc2xMtCnKaIjU6yK0GrVKqWASJZBR6i6rcQa8NQHmdV9xRSYvu0dh xahOFTUBJ1aVXcgTcNYINueFnH7kHHcRpz/M6e/gJEZnIRRmZjhL3c7AOyVuZzMZMagS6TtL3FXO wElB9xf0SkHrkXa5UMBZap1Q4gyQGmdpoGz2hGWlNSXY3bYIbV9331ptZgZs00YgGYFUwOKu30Ys vYkgqKW05zYKaj1OKmB3l5QGbO4SPoOA5CkdNTZQMaiytCTO5arKzAiQvmPcowPg7hMw+AQL9BXD BJR9AyoxjLOOrwaWO7dltCy7o9kIo2t8urHusaOurQxIo6r4GFE+HLckYLnhmPV8ETs39a1c3Lk1 TlpWaq1z8uKyZYudgZZBlZ1bXfxZVYV9oCz1lNUsK8Oh70Allg9x4mj0tqrKALkNh3TylfBVhddX 6y7lNTXXOwMadx/3hGXX16Bp7MsCMHieq9Fu9+8MHQV7qXPZ0Eq3K1AU564aVRK/LQaWDZ7XZPM7 bRe2ZGZsM0aFFbst0iATOn1norajTVCCnVPlgzs0S/iM3P3QIQLOMU6cSaUb19SDP2p7wLIxPZAN f6oISgXGokXqApq+NcuMPXk9lw8oPJgiLvsFQ3uN++T3F9aMkmuUHuMvwEnuJx2uhu3tdMDnC6Sn cxdR9UWb4hx7i3K3zIzZzbS7u97oRITqgwrU7aiqnlmofpeLG3h5sx9GYyGwcFBluOyE0XGN4M/y VQVoDW9paW+JHcZbFra3dIjXuNGTt4tLX2xA7e34z2A0R5dO6Bkg5r9prg23lw9xlw8aUeksXVYj 67Z86AWlcHuPjjaZCkT3rZTiqEzROEm0olNe28HMC5W6APPgf0rh1GMDEjqlqCDOsoCx5orws0rr cv2lTLNK3UmoOXSKSwl0XkyeZaCn78JyrwvKF8xOt0zC+TIvLR86Ytky7QVtZRiAli0rczvLltUs G9UcWjja7TS6l+2km+imZfWlNe0GbQ7tWh4XKLujChcxgfTMhL3S/WAgBByhFmldkzEmx98s3ddk iM7xFxulNVCBQCEg9YcWBApTpVWwAIEie3ljZtecnZxo0kbmGJF/OTgRFiJI0IBPIsp+BM6/vCna zLu/tdEQJeRubMzOCxNNRmtORXGMNBeIVCtNATc48MI2BRIRj0GcgHg0JtB6MU9/k8GYsxDHK0L2 IikW0rC5GDPtHMQlkh3iBNusxsjwOLMaU9NzirVSX8kqWAySHvIQqyVVY47DuVvy40z90pImTQSf 35JGY2zOXkweVBCDXAuRy+Iw7JW0kIXAVzK0SaPPWVmsw0OxAYEi1xTsYoN4+qUpjdgRjlcqxYMZ 2yZKCRCLuExKbIx1tOyWVgu2u3kvOF7vRnUuR036yJyWYo3UG1sDmPW3IPDRVjZ5e+RAsVdKhWwE 
ikpdgNQC/iV6aRlSy9BMy9A0y9A0y3AWy0CJefZSbFmKPFnSDVAvzYGVCBuQZthlbCNqcKcgklNz dko2yYqaMO5G3RGstTdpIvnMrI2maMFmbdJF5hTtlWbAQASKk5/ZZLHmTN0tpYulZDRZ47hAfaNG h6qzhG2BgmZug71SvJQoNJEgNBAodmCZgEFyAMGL5wGuHfo+PcjtS/djmeO3ZPyOjP8ZxqEWeqAJ R/E30/c4PlocT7/CzkbST2ADUpTupi9DNgp8TJv5LOhHdCcUIT6E5bGIdyLORbyr0fWGo5k2NyHC uT/QqDfzxdKXG31ZMuHwyIQlTiZM5pxiD32Jvgjx2MW/ECcjfpG2QBLiFxBbEbfQmXhNcdBnaTfo hXi7jF+he7hP0+foDuiBuKkxkk8h0KjiaGujkqNnGiFcqshy7KHP0CfBjqxPN3rtWLu5yZvsMOzG /gh9nM5sTHCYirX0YVJJTiNTAxziGEz0kcZ83snKxj1Ox066kq70W/P9Hn+mf6OU7cnOzN4oOT3O TGe+c6Oz2EjvAgUqDzcsXY7PfHBS9B4EP8JKurSR5QeK23BNfF0UFuKzQVA1+KwXFODT2NF6SlBF 9DYYiECxj/kICxAWItwMDJ83INyI8A+Em0TNTIRZCHMwfNSjRD1K1KNEvZCoR4l6lKhHiXohUS9G n4XAJWpQogYlalCiRkjUoEQNStSgRI2Q4POtQYkaIVGBEhUoUYESFUKiAiUqUKICJSqERAVKVKBE hZDwo4QfJfwo4RcSfpTwo4QfJfxCwo8SfpTwC4lslMhGiWyUyBYS2SiRjRLZKJEtJLJRIhslsoWE EyWcKOFECaeQcKKEEyWcKOEUEk6UcKKEU0gYUcKIEkaUMAoJI0oYUcKIEkYhYRT2mYXAJY6ixFGU OIoSR4XEUZQ4ihJHUeKokDiKEkdR4iids006UPwqihxAkQMockCIHECRAyhyAEUOCJEDKHIARQ7I S58plEHRbeYjLEBYiMBlW1C2BWVbULZFyLYI95qFwGUDKBFAiQBKBIREACUCKBFAiYCQCKBEACUC QqIBJRpQogElGoREA0o0oEQDSjQIiQbhuLMQuMR/75T/tWnozaRSjYcrXUjSBF4A3ws8Hw4JfBNs E/gfsFHgG+EWgW+AfIHngFdg7E/gmeBQk0ZHvqHYjCFgIMJIhKkIGxC2IryAoBLUfoTPEEK0mz+J GVQDVRtUW1UvqBRbVUdV1KAcqNyg3Kp8QanYqjyqpM7iOKoXcRRDC6wQzwX4/BEBDxF8FgmqiObh uHkYZ7vhbx7N80eddP6YTvankxfSydZ0siKdFGvo5YSJSOeEfLxrOUilX+ft7TiEkO9N6Y2R6a4d 31scjd7ujmayJ4zS/D7E3yNsQ9iIcAtCPkIOQiaCB8Eh6tKRv9KfJHe5ByEFwYXg5EOAmb92M0Wp /TupnmxselUPGj5OSirK7W5MyUbU3JgyENFzjSmjHcUasgNSeBpEnkXLPYl4a6PjGDY/HUZPNTp2 I9rc6MhDVN2Y0gXRNY0p7ziK9WQYOBgXHSrjIbhujgc3OoYj26BGRxoiX2OKl3On40AebE0jlXAM sUeWSg6P5G509EKU1Ogo4NxqSOGGJ0rIFNNTIHAsNeGEftxJKhnxRzhOOlY7vkfx71Cx6B4fOZsZ ov2eZjLcr3XsyXwImYsdjcVazo/nwzYZBzh+1rHRs9TxAPZFPDsc9zm6OO7KbFZj9Z0476ViiEbH LXgveNIf7VjoyHbMzDzmmOG40jHKMdhR7cH6Rse1jj18mlBFKumTOxwV2GE/XIWn0XG5p1lMscwx z+F3pDgKnHu4fqFHuN/8zD1cA5ATHj0D9ZvuaeY+Piy/mUT501WnVCtV16j6qHqp3KokVaIqQRWj NqmN6ki1Tq1Vq9VKNVNTNahjmkNH/T7+oVGM0siRkvEnE7SR8if/fAkvF5SoKVwJgWipnJYP6UPK Ay1joHy0M3BmiLuZaDHtVrj7kICpHMqH9gn08JU3q0KDA/m+8oCq4prKbYTcVYW1AbqkmcDQymYS 4lW3xfH77TYCt90ZtxMIsd12Z1UVWM2zi6xFpt5RBWUlf/KokZ++8z/WzmRCYG35kMrAloSqQA4n QglV5YGb+e13JzVQfWnJThrJUVXlTlZPDaWDeT2rL6lCtmOCDb05EtkghSNkU/cBJ2fDeNKHs6GN wnxeFEc+F0fIp9WDV/B5tXrBxwjn23bIWVqyzekUPB6AQ4LnkAc68aDHoGzJNq9XcLmdpJJzkUq3 U0wsTXTkcCBLpkOwEMzrREcOIgYLZJ1n8cgs3TpYuomxJHKexxHmiUlt54lJRR7f//Gnto+PNHWd Nf9l/kKhxl1ai1ATWD57gjWwcLTTuW3+LPlNg7dm9JgJHI+qDcxy15YE5rtLnNu6vvwnzS/z5q7u km3wcunQym0v+2tLGrv6u5a6R5VUNRUVVhZfMNbSjrEqC/+ks0LeWSUfq6j4T5qLeXMRH6uYj1XM xyryF4mxSuu431dUblNDnyq8wArcRCO06MM1ca6qPmZjfW/u0Dt7uazz43YxIJshAq/zOnefgB6B N2UWZxbzJtxnvCmSvzWSm6zze7nidpHNcpMRq6PcfaBdtcCZygPdBpUHXENGVHJXCfhH/bnNZvAf 0WyF0roS/A/LMwXgb2dOmPGnPzP/7GfWrFkz+GOWbwZAeSB9SHmgO96/t6lUOFRNSRXWdWmvkyRR t02jKW0OtWCjDydBZvLhOOUjPtSgX4u3LhVtUDaoKL8qzGyyJ+RM3Ysn+AIEvMfROY1Z4r5M5zQl efj9ZWZTVrcwxvspx412Vw6O0JSPohx7wtgflYnESs/KzJX5DZ6GzIZ8Jdbu2IiVjo38KG3M2ijB TN+MdkUgObMKlY3T4uM93BifIAZu4ITPV+WbQYS+/qhs0q70DsXOkHudIbqf2W6QcP0MCDOHG32z 2oVmySKicZYQQVKxC+IFbIJ45sULFISOtUOwLnSMt3FMv8UwnRAG+acRnoJ/kVTihCZyFizwG7GR rtAPXe9XzMu2Qhuswcv6UFhLTHghM8Mw6EcY8vjgDvJAaHboBFwGd8MjoefILaEt2L4CXoPfcAaf 4jGYDwOQfxjUwgnpK6gK3Q9qWAwReGEbTMwwCj7E319wDqvhHnie/CP0G44aA7dgf4VQDMWhF0Ot kA53sJWKQ5pnYRXsJsrQmFAdpj9JsIz6Qh+GPgMvVMGj8BTOyUda2BXggolwG6wjNuk1pNbAYxAk Olot9VW8gCP1g+EwBebAMtgCbxITqVAcUpwK3Rj6Bl0sGlJxTnVwgnQj/enjTBfqHfoYroGd8Aau l/+2sGvYJsU1waLQg6GX8Gr9HNGSPeRFRY7irrabQw+HngEdzqcramQAjjMaboUXYR/8G36iC0IL 4AoYgiO/ShKIk3hR4x9SG51P50vvQxdcbTXOdhZsgABaZBfshr2om8NwFL4iMSSOXElGk1XkJ6qj 
Y+l+6QFpu/QBI+wJ1LcbPKijmfA47BCf0u0nCuw/m1SQ68lUci95kBylAfo9/ZWp2a3sHGtTeINH g+dCA0K/4IXaDlfBDbAAdfuo+Jzin3AQfoKf4Qwxkh5kgvi2xFHyPdXQJDqQ1tO1eDV+WhogrZJe ZN1YHzaRvcM+VtyuWK4apQq2bgyuDj4dfDf0XOhd9J1I7N8LZajRm9ErHocX4H3s/SP4BL7g/oP9 9yIjyHU4ygyyhNxDniavknfJt7hKEL9JtBctwVGn0umop1voanoPjr6fv8agH9NP6Hf0F0khJUnd pWnSw1JAapYOSF8zI/OyLqwrG8hGsBBaJkdxuWKIYrPiScVLilPKQuVYZb3yuOoW1SL1223pbZ8G ITghGAg2oe+q0ZNuQE08BI+g329HG7yJGv0nzvgonEYr2ImLpOC8C0gZKSf9ydXkWlJLbiGLyd1k HXmAPEKewRXgGqgK5+6jxXQIHUVr6SK6mN5Jt+PvLrqPfkgP0ZM4c4vklnxSV6mf+BxnCq5hpvgm xSr83SLtl96XvpGOSyfRahaWyGaxG9h9bBPbzt5VXKWYjL+PKF5QtCjeVbQqWpVUaVfGK7OU1ys3 K79QKVXdVRWqpaoPVD+r60k8SceZOzt/UExtuAcT6RYawxaQk1iRgFcKA67ch3YYgrviZyiSgmiX SN6Oc4ulNhbNJZV+FuAvJMhu6EZehQVKKmHWx45CIzlCj7KX6WVwkNQQG9skTVG8SV3wJEajlXQP 3U36wHZaSIfT9RKQr/DI+wr9fS7cQyaSGfAkOUl6kptIPlkAH1CzNIQsgsLQI5QRDelHTgHOAG5m Y+G6v/9MnRTAETgRfIjp2T8wPjXDWrToU/AZeQLOEkXoe4xuEkajURhl7kB/vw141KvGfbYA96MN I8gk5X7Yzr87pcpX9mY3wCn4HU4odqFH9cFI+k2wjj3EvgzlhzJxh+Eug8247ybA5bhjvkIv2Ytl XroWd7oWY0kO7uoKGAFj4SaMeqtCgdD60K2heaGp8BbKniUZ5CxpwB3RjBKF8Ab+roCPyHLch5f/ /Tr/6ic4FlrgW2IlHpKD++GkYrZipWKLYrviecU7yq6o7UXwAHr0F+jNWlzBGHgXvoVfiRptY4MM yMP59sC5V8IkWiXthb7EDvW4Z1MxjveRVzIDe7kFtbce9/Ne3BunME5cC8/DIUKJBVc0BsdXYz/l qOeRyL0RLXgracKasRi10+E7XHck6YG37QzwY09rMWq14JyOwNeo7ZCYVwbGhRIyHPv6Fa6GsThC d6jgf4UU2oGRagCUSG+jvpOJEfqQJPIYytXgDo2EBChQfEkoZAQHhHrQOmkvnjEhrG/A0ysOLiPT cBYGXEcbxJKB0C04GOcQ/pn0f4RdCL+h/w3AvTEY4QP52yMIitFhUN59EXx/HlSj/2+gufLPIWLZ /w70swEipwMYngEwPoS3/S0A0Qix1wOYESy6C8HaAmDDdvv95yFuPkBiPYBjJYBzLoDrofPgjuf/ hs4luASX4BJcgktwCS7BJbgEl+ASXIJLcAkuwSW4BJcAgfJvbSvwFyRQQZ/tlASVqmZa5I8GBQtK oFWxIAGbWqkIUmkP8YKGBIgVrD7jmcK2wgHG04X92wqhCGljKz66ZruiXFEefBBg0OqUWlr9/A+u nKyFf4ejmCymdbQBx8rxu7KJn1CSjyMbJaeULTGpRGEEJ2Rjs409PsnqG2A8Vt3f+HU1ZJ2s7pod jT0X01SymNiC3/DeVuPjKWJD9mR/LO0BWuo1gEP0wLCH8bN5D6er+7dBUf+TXbNzUX41/6hNSIfa Qt/QXor3UbqHPwEI6UelGP63TgRrtOQ7aldI32Evq8U8Tvc/OcB4pv9JXGlR4WJFF99Nxle6ZqtI LpHIxPeDq2yK78/G8L+EHh76hkUqWiASp7HaXz5Xu0S7iWxRbdFsinxO84ZGPTyqylxlH+4YHzXB PME+3qEuoAXK7pru+n60n7JUU6bfpHmL7lO+onlF/xE9rPxA84E+ymh1WqmVf6LsMZnzrBvVeoch y0ANfiwZNoIi4dBARpg9KeZQhM31/kvn5zuNT/ikbxqHrtmkGqqrSY7FHGVUKd1JEGXM725JUqqU UUazOTene373KKPXS3MOzl2xcs7BD4Nn8ZlbYU7IG5gbRoqWdduDI4M1O9aSfmQjeWjH2hPFQycH 8edFf/HQSWhM+mIx2uURNKkXdaCB4X7NRHojXc7/hKyZpDWNVBBFM73uObVGQUCngd2kEnVGaLVf rwDmYE4WYIzZtLvIJtIAYfMV9ueeJhR/uvpkQddsqHa5opSqbt2T83Mlb/Cb+9+dQmj2MeZeWRpK 3nc794xcAKbDGSQQt3/ks9Yd9p1xb7LXrQesB2wH7Oq+cX3j+yYMtz3A1li3sI3xaqXdCanKfPsV rK+1r62vXZ1sTbYl2yWzlw1nS6zr49bHr0/YEr8lQW2CBGOCM6FrwuyERQkrEz5MUCdwu5hjYvMS qFFnSOAOTLkH+tGN+JcB0EbQTB9uokRn4N/rcjt0WTqq47bTbYxWaA6ZzWQgTtnuMBwyzqG2xHYD nhYWLCzsb0QjtvmmHcNN5queVhhlKiBRub5q/i0BSAi1NEYV8Dk0GgTyRxoLmNpYoFBHIY4qCH+w X8WtXz6oci/EhY5CPEJC6GiPHj2qyLRq9IkoV3dTPtq/W57XjQ7h6Z6cm2OOjUHXYEoV07WmGBu+ f97Xs7aqcoI6eNxG1K999Nvl/XODZy43E0Xw3D1Ec3hb0dXDrqu9/sb4429++8yYptHFpyu8fD/g g5WgJbSgh33+Ap1TX6DR2XQ+3RDdRN0XOuVJPVEyM/OwVP0V+mv0m/TP6V/TawhVg06pVym0EXoV 6HR6fTN5xm+XWIyEYYLqmF7SU6YFlV/foj+Ahd0kFdQYzLbvAMZQAJpJ5XbFCi3RNhPqNxlVG1Qv qCSV3VBEF1BKbZG7yFXkCuFhx6YZz1T3Rz/jTlaEAa2tupCgkk0FBSAQ3+4M97vBYOBqxF3kw/jW jeRG5ca6o0gUofPbNtN/fL9jR/BUcCtJOSM92nrdr8GPaCL5JRiBOrgaY0I66sACbtjp73V9xCz1 YvW9tk2KTeonIrdE74zcEbU3uiVqf7Q+VtE9qsR4g/lZ+p7xQIxqN+xHcUZUVpMxzhlH47iB49Bz 4jYa9A5Xlou6uB+5Nvo1BzQhjaRpJgObthJCmonLn+RgWYwyzsA2xirIIZiTeGigjujsHushky35 okBxOuxlp6vPVMsRgyuBa6A6HDqIwiu8o3tujik2BkQAAfQTEsODR9h1mCF4Sju0b9WNxrr1gXPB 3/Z/GvyCpP+w6XDbw/MHDZhQP3RQPRuSOLSioe0fwdMffB48RarIUrKajN3demLpmhuWr7iN/3nE PtzCXzCvOJG6+OOkHkSp7MG0mq0SpUovcSqyFVSxVf3Ok/wU4nYzFp7BOFd0MnxGoDtH7eNRntgk 
PcetP4djPoVhZCbu6BzR85V+H1OAUqWhqiI2EDWVr1Q8IxXBQOTLp+HDSUntGjz8+vcVjmL8Goow HKF/nGl3Eu4l50cdxsi5ECNSNqEYFEuIFGzj0WgVgNKm2AU68pQ/IkLyqr0R6MVEag4t9Gvie+Zp nT175WmaQ0ebZOx/LL4L1uJDqVFrv9R8r2VMo9VG03hm1Di0bprBnJos7Xg6gdVqrtfOoXPZY5ot 2mc1u7RnNGe15g1spWaD9jXNPu2/6CH2oeYj7Tf0OPtK861WP0czV3srvYPdqrlDu5KqKiNq6fVs vGaCdjadx1QltJyVaMq1V6uv1lRqVVZtVmQe7cnyNL20RZEqvvGUGo02ltqZRaNqDvXyZ0pajZOp NZqc8N6kEVptjkSRpBFqSdIxSnVarUajUjsiSWQz0Tfxv+PchYe1AmPjNdV5Cu7SliFD8xQ5Kr9q gZqo9y5A1eyNcEboaDPt4TdhguJHRvAjE+Q40IF5N/qus9D6p6ed9PmMhT8YC+02Y9u0tmmFdqux zefDCtzXeGYgxnPDZBGbePFNryzuYuXIhy5dHogeghFUHTq6LcLJQ2G1+Jk2Pfwdq2nVmC8QwtMZ 4iJRq8huoiUqsid4MvhJ8Mvgp4pdrVbp+Nkydsu5+RzQzleGvmY/YUaRQQL+y3ZGNSfsSH0tg6mi VbGWaEus1VerqE2dqZyrn5n6ke5Dt65KOyxyWFKVe4JunGm8qy51fMachNsT1rp0Jjf3hURHHsf+ Wps9b1DSIPeLSS+62bSkae6bk252f570uVvp06brk5OS3QX6PHe5tlxfktTXfb2+1j1Pf0PSUv2y pI3aTfrNSdEarUavTFK6bVqb3pykSnJr9YxYhlv9NmfeVCuZat2A2cUuWotnQ4tfZy9wxJG4zBgJ riDcNv3szjy+GSpIDVlJGjD7ayFq8gPz2wuMmHVkpmusP4YsxOKPtuRZylUpXnsXR0qDMWCkxnLy YxQ/CinYMt8bEj7Ny4dUbgN/jyoMONXhlOq0bzoPPdN8p6t9x8J4uu8YmixsC3HMJaE+4hJ6oz4O yPjLxuiCJFQPIiztazTx0gG/wVSgd5oKtAIMvO64P1KHdfoCrZVDdMEFX5yr4unENFId7THLUSxF /HbL694918ksIuaplLExFjMz8yORuZ1wJXHaNyxeseqyq/J2/lCzeMGPT5AYYlEFD0XfdNPN/bIy epDA/ll3hOCF4LfBD8kn8auWzBuU1y/O1KXX8HnP1L887qc39dPGdEsqyPNkjZu8d/n8IxMJ/3oz 9AsdZ11YbzwlcsgA/wSVXR2vSDDbr4y7Ir6f57DxsyhNd1uZ7WrvONt47+3eu22r7RsxsXnd/kac TqnUx5qVNnOKMi22yjaH3k43Kp9VvqbUvZD3kZEmJOd0jcrQJ/t9XfKS/Ump+LAl5E1Nbk2myWUi a8iONORdlkB4dhNI+D2BJSRkkFzwYy3PqDFwuvzxUUUuf5wRH1Z7nquZznyWqXR6bQZPcrBNYGwW GDkykMPvj4lI7OpVp2lS9VUO3QYdxe0bwh3sj8Tsxz4wj+TV4L65KxuPrNw010gL+cxCBlpGWqZa JIstt6447DPTpqOXTDtZPcBYfcYXLh3jhzXufR+mhbjfhe+YCrKqp/lOVmMRI7MUaSwsxPsIZjjc wilozxy0oBRjtri4iZWYAgszY+ITPrzwlOGWRj8QuRCpDfne27+nuVyK8wS/jTCqpCseq35s7/AH 7n71qoqp5UPJdd2/Tc6vLLmqNNcYQb/ocv89VUufCzbfcdtV8fk2dVlZ45IRd5bHe5zxg0p7Bd8z 5VhTCnsNz/HmJ9dyWy9GW9+DZ4IB4uHBnWAK/ebvGlGQH3d5HDUNVw7XDjcPt1bF/6pSdmO99L2i u8WVsnJ9eXRp3D2q+zRaXSQeTmDn3z1VqGK4pqMjIgygtbjU9vpEkmhMo5LXwP82Q0fqYSHfgQlF YW1OK+x/sq3w6wHGafJ1Bs/NItQTTKsm1X0r/RHjlOO048zjrHXxiuoqzHX4tYmf+Xjao8ZSYqNx L3Qc+HgXu6XxpWCwbec12/ymvH7zqm9dNL72dsWutlP3BL8J/o4Z0cfXVK2n6Y8PrN/w5I6HH+Tn 4TBcexH6uQ0+9w+qNFSZqswTDHWmOvNN1nm2e+m9uteMr1n/ZfzQekJ5Qn0i+kTsb8roHtE9Yq80 XWkus1bp6nSqnqZ8c75VmqOYY1isuN2w1LbZtMm807TDrIkU/heXx/Gzppi8yFw9r7El5glsiMrT 7yIM74oz/aaoCPAjK/iRD3JXohfuwp3IsMlpURFeS1yQpeeE3jUQDzB7nMoVY7NXFp9Pnar7n/Sd PunDCHa6+hj6Y9tpnw9xOK5Ma8+ahFt1z1colR15E+sa/C5yzMC6mxZMrBgXS2J8p985EfyOmE++ 9BX9PmfI0FVb9q6/ZmrW8y8RL8EUkHg2cb8ZirobJfvNSn+mqUpZpa0yhb1lHbrGbxpNfeLCRNpT ytP1jM2zXSmV6K6MLbHdp9HECHeJ4F7jj4xQRRrQFFpLWqTeS7inGAxgX8F9x6W2JVQWdqxw2pmw x4gbWDjLwmRwGvcVfZ2yTltnCnuLsrrK5eomL9CUm2PB47Ozq7BRwXPF20Y8FzwXfKnxFmJrM2WV 3DBqyaLxYxevv6aKpOCpEkls91Bja/2Wq6Y8/thzD2/A9RbjelPQV2Ignjy6E4y4T8oiCu7T3K9f a9ys2KTdrdmtb7ar1THkCnq5skw7MHGzfodyh/117Ru6D7WHdL+pftXr4w3xsf64hLxYf2RUniH2 hdj9sVKs8IbEIoEjLYjpnX6dIdJUEVkTSSOtJn767bDF5ZFck7jIJTjzBE5KC2NfZhhb4wX2GzBY NvC/UzHitEeaTPwL4izCZOXqTo5QgYtkxYadKCtxZOLUxA2JLNHgUvv1hjxUuBzrfFzj1dypeD5+ kn9BPcbqT40psvoTDfjAAGvlkVicXUVt4nA04SSQw8Qng0wmORBz3NjOikFUnHdCALABU3vebuEo 0KTR9hbFYleR+Hp51TEeQqvF8JF+1FIkHzSSDx/pR2WJ749XZeGF1IdHNGbCufyOMA2jBeEu7sQD lPs4SC5xrEaHT1ELPUus3U9sDX53Wx2Jef8kMSnb/NIto/qMSJHmDr+2sJCQwVn3P/zsqk/QF3zB 14N7b1p+BZl0w4K+fWfwuGHl/zoT5ldmaPbndGcknTmNzqgqttCqULMXrDTWHEVjTOaoyGgDGCOj +Z8PxWjUhggyMiIUQSO4IbRKEmUwk5CZmHkxkf+V0Sn+R0fRMVpNbpF6oLpCLalTjVlRI6NoVDNh fn1ktJfGjIQGc4uZmrlPaHR5Zptl7k5aF35D4cOQyt99tVYXnq62HQMrbhO8p7chFOGjIMcQvjny gyg6V2QWORaViAqx/AbpinJb1xfcN2vuDG/f3pd1e++94Dfrmbfi9kVDkl8xFgwq/6T1Oamf2PvB 
QaxG5AdZJMdfMydhcQI16fT1XW/XL+zKnMRN3Xj1yKW5kp/0pX2lKkNVTJVneNpwNNVvUb9FR/XS 55p7peZmYKpoLk8tyTila7No78LzOEKnj0jX6VMizZbYTL0Okx1rMvf/Z4X/CzePjBIu0hShC+PU 9LD7uz1h3DUvvA00sXHiUB+p4OHGYUjhKFKbydUdEauy2pTpaRFeu5WHHI3NZrev6Eq6YgBq9msh N9llsmV3xJ7TcvQxnjS2HWs/qtpOTw+nbsd8eAGziBtqAQeV2th+jE0TsclQF1PnGZ82zleXpeQn mUVhtrQf7t0wTMlOaunmioqJpG4nZgPRne6y80ixOiF1+JR8T7R+fsuHN40m5IVXFxJV7/rdK4I/ fdF6a834u5ZMqL21LKVHbKLL3NV93QNPPbviIIkg9qfXtF6+Z9f1hTvviqS3PvHgww893vAgquRu AFaFsdsMjX6fgThIATeWsQ/pE/Up+Z1oVAqzIplWRk2IUhBCo2OiTNFSDCUGrroESYU3wJhYrRkg QutVa/zO5LytGhLSEI1dvBs0JyXnrbQ2WGm99ZSV/mglVojxmmNFaELehlhyKpbE2ixFYfVibs1f 3WK0QeqMXBIxnl+ZTqJOLSKHUhfK13+eBCTSWHTXPHGkKTlJnlyyd9T6gQnBb5yDLiubkhv8Bo/+ rzZcUb9kRdsq2nXTiG4lS29v+x4Xjf4r3tciyW/fc3aCBmdWFKUt8msqNHShJqBp0RzQ/KhRODQ1 mgWaBqxQSEoVKJiEJ5UfDsBRlKzGvEepUKqYlqrwXBQe50rOYza1vK7z6ygSW1BSGPmKwpngdF/7 VX11+AUB20FYsPXclcx77mO00Hy8qa/DHZZCeu2ENOy7GueHEU0XqzTr8qQ8dZ41z11CS9Wl1hK3 zillpQ3R1KQtTNuQ9phyk2qj7lnls7pA2oG0o2mRkJaVVoENL6R9lqZM89vj84qwvFA0KlQuprIn 8BDUqFW5RCRiKmNUVEpcfLw3RYtLNBi9pij/iG41UWQqTriZlvkN9jhvQjzWTY0nNfEkHuu2e7ze FH56NwKkiANNU8SxvzvOOwVZU/zFCIUIySl5Kf6el+VlpexP+SxFMqQ4UhamSJDiTMlOCaWwFFvq l4Xt6bZ8OQrvvMIzeHZgeDszrdpXeN5FxM0aN2T7WyLU73QfD3HEF+2K5cm2RaTcFrNwmZQOlznv PfOJtLxl3NrsskeunfVIKvpQQsqgXhO6BL9JLOpePCEz+A3zrnpi6LBhQ0deW7KurYqOfKhL4RXL 1wYpLXtgREbZovvaWsNvV1gV2swMG/xWVbQleoR6gpo1M4LWMpaoSwwnjAql2EJRqki9UhcRgWkP JV4ziC0EJMTfvv7FFtJGeHWRXL96va5jJ+nIKYyZF+4koak/bCbx9qEjY3JdsHWEknBDsargN8mD CvrN9KFDKpa/X33/QAdNfKq2R8WixqCDeddv7zth0Y18/wzGXOh+XKkeM+d7/VccJ9+of43+NZa9 To8rqMmmsGlolXF49HBzlfVeuk65Tn2vrllzkB5WHNEc1H2j+EZ5XG/cpH6Lvq18Wf2aTjFLvVS5 SC1FCS+MsHAVxTBVTIHKXhNXH0fjIl1wQaobvjCEE8D2KKupM47D/K/OyggPsXiRzjPhskC8IEz2 ejrF08HL2tb/m+QF931/d/DXZcS5dsqUNWumTFlLk+4gymXB13/8d/DlRaHND23e3LB+82Z+3i8G kPJxvUbY7E+9V0E0kWSIYpxilkLKMlVGToisNzGtxqBz6OgKXUhHi3QDdVTXTOf401QqtLFEldpU 0Bg12Zp6DdPYF5g2mOhI0wLTVtMBEzMZwUskfgpFULqQNOCFyhZVtJPEQ/s1qcOkZ6pt/cPHOtoS LVyQEw6J06A8YBnC/x5yROU2bU6PKvHJBNq144BXRpEGbtW+E0tqqq6+/LJeg7OY996JJd1+6VK8 JfhvXGM22tSIa0ynU/wPKaOUbnWKJcriXmdaF3Nvypp0jSqmLIaadut3Rr7u+sr9m/5MkjJNP0xf q18Tca9pU9JOnarY7U8u8Y5PGutdbFocc3vSrcmafG+psiziSv1AQ5mrT5IqKTnFm6/r5uqW1M3d LVml1CqiNC6rPkWXlJTkViUn+TNm6ObGzIudnTYrfUnsovT7Y9ekb0/a7tYvJCssd1jvS38iPZCh TGoOvcVPfZeMk8RLqmRePtrkSA6XbXZR9schMVFPuieVJa3T35P0StIHSUpXkk7PmB3kvAJyeYbR ZMksInIKKspJnjzx3ikBIyaQ8JsnVkMWklNEAmIU76GY4Iw2Iych/npgZCQ7xSgrS40w+7Frc67F j/1a/Nipxd8tP8/C33xY/J40fGC/BotDvGRglmF2P+55g51U2EN2ai+LVllcZr/LnWf2xzvyHGby GWaKuWpXhWeFh3r81oQ8jz1DvLLEAFuRQbIzSFYGyUh0ZRuJMRdvi3IQFhhZwimRRp8HNt/cZu5Z rRhYxesMOVj4+Lt3HnL5S7Bq8RZMTm948bRPDsc8IbcUyFm8L5xNTsOf6vDnQMmhfX5NhKnIkIoP tMD3O/QFuhhdAScbdfw92LfbIgpA/lvRKty/4Tde/DOfFG9KsnjjxaN15xde/J/g429JsondNGXM 5HxPTGy/4FPXzP/4q48/SA3+GjWycmq2M95LXqyqPP3jR20kyzd4WGp8ljM2Jqq89/D7lu25a3nX 3n0cZndibPy4K8tvv/u9APB/suo4XaV4EGPYO/40J2Daqk0z9Iy8MrLKoLLFglUyx4LFFB1DLCYa Q6ySRqVV6azc0AawNFgCFqkGUYtFsmB63ogXZ37Rglj+qTXea3URmixtFmCGPBJ3NE/gU62S12Ia FlsUsyFma4xUE7MwZmXMgZhTMQqIMcY4Y7JjGF7p5za0H37lgXzc071wT++EmFBLj6pwdn+6utB4 WmT3J8Wn3ch6DI+9qFw5u68mmMrHCJ1auNL4S8Qod7fcbp4oekNLREp8ypXW0f+46oaCCM3NNxM7 8x4NDr3FFx/3cXruoNKua8j+o+8/FlyK+rkTI8IQ5sXzbL3fcnXU+Ki1CkmjtCkLaWFUOS2P+oaq REYYxSLMoI2NwYsL3l68sbHAg1mkWZxq4SvO35xqGnXHcaYmp9RE/deJYf+ThcY/nGbV4au+Fxfp Ci+7e3dOSgN67q2buOUqYnMMLrpiejqxbRg2+rota2lD0Hq0ttfAWcdIC6ZauM4IPLdH4Doj4Ad/ rCLVnpWn4g8lf6j5Q2oOHWpCLJI8p71n3v2MKKUItVqri8BMlpoku8auTYLMiNcjdLjRTvlT8Y6u BUVEDNgiPJAekQc9IxaDJgK0LEKr0VBKlEhrCvibIL81PjUvQu/QZ+v9eqa3WOxGbZF2oFbSNtNs fwSjBRGMfy4ksV00G5OEhX6DrhsQJwYkidh0r6C/2LjD+Kz9T1bjSVFtG1BaW/K1KIsciSdIpgIi 
jLWdEGOi3O0QMYbpQosYL3rAHDGmP/18A2Ls7SDGix4wLzEG6eFBjOlOfx8UYuzdQYz5LP0phqHL iDFMP92JGMP0k6SIsX5WiPHAlBg7gxhrnBFjvY5DjDXMIcbeHcSYq6tLL2KsV2CIsd7nQowZ1+FE aomxXqUhxnpVhBjrlRxirDEMMfbPQoy9HcRYcxFirKWBGOsVGGLs3UGMvTuIsXcHMdZSDTHW/oQY 9/7MLnGro5AtKbhEIX1OxSVAmgbhEjc7WIBL0EorfFwCRM9cwhweRMMlvB1cQscdLqFdDpfQkeMS dLm/LB2X4KP6F+a4BEzfX8QlfFjlEiAHTyiX8FHhEjc7LBEuoe2ES8AcfrKpXEKzHi4B4y7hDC7h fcYlaKf5WLiEpguXoJmuLbiEFmG4BF3uT6XgEqTrn37hzvH48tyKI9w5Crn+we4cIDe7cxTSz3xz 56AVv3PQjJ5bgTl4NncOHXfcOYrxcyswh8u57hwP+4Yn7hx0ud8QuXM87BunuHM87Bse7hyOcOcY mLpzkC39PYIYer80uHM87Iu9uHNotuLOQbZ8ex6mKz13Dme4c5D1w0nsunPQjp3W9sqIOwep6DdW 7hww+jKwYHwVAtOHzipE24lVCOPSH4Tz0ohVCJ/VVw+sQkiF/h6BlxirEK2wWIXwUb4K0XtGrEJo p2/zswrR6olViJZPrEK0fGIVouUTqxBNe6xCtHxiFeLtsArR8olViJZPrEK0fGIVouXDKkSrJ1Yh Wj6xCtHyiVWI3sRiFUI7+nsEXj6xCtHyie15LZ/Yntfyie15TXtsz2v51Pa8t8L2vBZPbM9r8cT2 vBZPbM9r8cT2vFZPbM+7p7E9r9UT2/N684nteb35xPY8jG/Pa/XE9rxaTWzPw/Q+sz3v7dT2vMrR dkGcrufzOJXeN/O5nN+tpn/Pdt9U9rVZ55iKzWUf53Usjcf5erqd362m73Jv1/nW5/ebz+GFE5v5 XM7vVvPV7/c9lp9NdfMa4mlnXVhDFNJvWqwhaKWFnjUESCs61hBP+0qINUQhB5FmDQFz+Cm/WkN4 O6whnvZNV6whYPSJz2D6tkGtITQLsYaA0VfcRKr68ok1BMx9r49/2f75n28f6MfrIR+bJ//1P779 2/f//rY51uO2ecKmbLcf9/Lt/yzb6G87sP1bH8THv/r5H353fd3P3//9L99+/6fz9z//71B7z20W /8z3XWoPpO8FVO0FYk8bB/KU2gPp982qvUB0/Qrj72yJYetPTXp3qL0YVa/Pqr1ox9avIL+y5/Ax Mf58K4CcFdL3rshZIfqeHRDdrQTp9UPO6EsfPzmD0T2HYHo+yBmMPiEejO5WwvxSQrbF8RebQI6Q kEIW2wQC8YuokB4BEkIregP3ZiIhA1MJgek3XhKisYmEOFMX0YDsN3CYq97APcjcwIeh75tAQ5Rr E2hopjaBBqY2gTzKbAJ5eNgEmph9E2hgahNoYGoTKMpdz2jGuA5bw/sm0MR8bgJFKtqChU2gaKZv 8NQmUDB6RnMYem0CTcy+CRSV2ncnahPIK5VNIC9DNoGC0TfCB9M3eGoTKJi+M7NvAkUI+8ZMbQJ5 utgE8hCyCTQwtQk0MLUJNDC1CTQx+yaQM2wCedrZBPK0swnkaWcTyNPOJtDQ59oEGpjaBBqYfRNo QvZNoIGpTaAIof4opd8w2QTyamYTyG/ObAJ5utgE8nGxCTQwtQk0Mfsm0MB8bgLFrblvutQmkEeQ TaBh5LUJ5IXKJtAQ5doEGj6rNoEGT6tNoInZN4EGpjaBBqY2gfxiZxPIx8UmUMTnsFG0bwL5NMkm UDB9h2ffBDp0Zxbj5/VLMVYEMXakxFgRxFgRxFiREONijrsWJcYw/SADYgyjL+8d+oMYO1NiPCAl xs4gxs4gxs6UGDuCGBOd7umIMYy+ETSY/nZNxFj7E2I8MCXGziDGziDGziDGziDGA7OLsSOIsTOI MWHu33wixt4OYjwwJcbKhBg7gxg7gxgPTImxM4ixMyXGjiDGhPkgvSXG3g5i7Axi7Axi7AxiTJ/1 p5L8Sg4x1is5xFjvzSHGem8OMdZyDjHWsYcYO4MYO1NiPCAlxh5CxNjDgxhrukKMtZ0QY01XiLGW c4ixjj3E2BnEeGBKjJ3ZxdgJxNgZxNgZxHhgSoydQYw9W4ixexpiPDAlxs4gxs4gxoM3lhg7gxgP TImxM4ixMyXGvZhnMd7+la/EWBHE2JES40L02CBIX2EgxoUctt4RY5pxMaYdfarfuxNirAMPMXam xHhASoydQYydQYydKTEupO+IhhgTnf7FFWLs2UKMaadviSLG2uUQ44EpMXYGMaZO++ldxJhx9dO7 iLFWWIgxY7dfa/cwhxhrmEOMPcyIsYcHMR6YEmNlQoydQYydQYxJl77k1cceYgzTd55LjDUVIcak Qn+tfRgWYuwMYuwMYuwMYjwwJcZ69w4x1ooPMdYwhxhrnEOMNc4hxjquEGNnEGNnSow9PIixTich xjp1hRh7CBFjZUKMNcwhxhrmEGMNT4ixM4ixxjDE2Me1i7GPCjH2USHG9MbFWG/fIcZ8lj7VH0zf 5UaM3dMQ44EpMXYGMXYGMUYR+jE9xJj49CM9iPHAlBjrlRNirDFEjDsyivHrYod3SowdKTEekF2M HSkxdqTE2BHEeGBKjCdmF2OYwxMjJcZDOyXGA7OL8YTsYjwwJcYDU2I8MLsYD0iJ8cTsYjwwJcYe ZcTY20GMJ2YX44EpMR6YEuOBKTEemBLjifkU4wEpMR6YEuMhzCXGQzslxhOzi7EziPHAlBgPTInx xOxiPDAlxgOzi/GAlBh7mBHjoZ0S44EpMR6YEuOBKTGemF2MnUGMB6bEeGBKjAemxNjjjBgP7ZQY D0yJ8cDsYjwhuxgPTInxwJQYD0yJsTOI8cTsYuxhRoyHdkqMB6bEeGJ2MR6YTzEeiBLjgSkxHpgS 44nZxXhgSoyHKJcYD55WYjwxuxgPTInxwJQYT964i/HAlBhPzC7GA1NiPDC7GEeY118Q46udxUaM FUGMHSkxVgQxLqTv/iDGhfTNnxDjqx/oLjG+6uFxxJjP0kPfPqoQY2dKjAekxNgZxLiYRR80j6H3 MxAlxo4gxlc9Eo8YE0H9WTBvJ8RYhx5iPDAlxs4gxs4gxhrmEGONYYgx8enbwSXGWswhxjTTu4MY a8GHGPvQEeOBKTHmOnYx1tIIMdbSCDGmnf6UNGJMuvRBc48PYqypCDHWVIQYawhDjJ1BjJ1BjJ1B jD08iLGGJ8QYRneMBwYx1qsixLiXxjzJrV8+SOMIk5wjNckpwiSnCJOcIjHJOcMkNzA1yRUz7P54 O0xyztQkNyA1yTnDJOcMk5wzNck5wiQ3MDXJOcMkp1GOSU7biUluYGqSc4ZJzhkmOWeY5JxhkhuY fZJzhEnOGSY5DzOTnLfDJDcwNckpE5OcM0xyzjDJDUxNcs4wyTlTk5wjTHIa5pjkvB0mOWeY5Jxh 
knOGSW5gapJTJiY5Z5jknGGSc4ZJrsd5nuTu+0khfXwXRB/fBdHHd0F8kitEXxkWfdGzP9GZvpxh kqMdX8kV8yuP3b4eXx6iAunvOyOQD/vqkkAW0mWIQD7syzIC+bBvUSOQ9Fd/GiV6c3hWugIJ40ti DV/YgjNlCwNStlDM4XF7bIEu9zrDFmB6nZUtEGV97DaYw0GrsgUti7AFzWjYgoYnbIGsH97sWLbg 7WALtNMfA8YWtFDDFgiz24LGEFvQEIYteHewhYd9yRy24OHBFgambEGZsAUNc9iCjitsQS/SsAWY /lnYgsYZW/DuYAsa5rAFDw+24Ay24CHEFrzP2AJ91p9G8ftl2IJ+VtiCxidsQVMattDbmSe519cH IhRhknOkJrlC+oMRTHIg7fbLJAfSH2tjknOGSW5gapJ72fcUMcnpwGOSc6YmuQGpSc4ZJjlnmOSc qUnOESa5galJzhkmOY1yTHLaTkxyA1OTnDNMcs4wyTnDJOcMk9zA7JOcI0xyzjDJeZiZ5LwdJrmB qUlOmZjknGGSc4ZJbmBqknOGSc6ZmuQcYZLTMMck5+0wyTnDJOcMk5wzTHIDU5OcMjHJOcMk5wyT nDNMcj3O0yR3P19sZ3uf5AKxN+AlIkviRNq0sk9yichKLhDd980h2RvwgtE3WmV37A142U5fNe6T XDC/sLTe4llvgDicM90TUkhXHRLiSCUERJ5PCsQ24rO7Zh3BHBZ0JEQ7HAmB6d+5khDtciTEmU/r GJFP65iY3TqCObxHfLeOKTyf1jGNfLeOkfm0jvwoe3B/aKesYxh6WUdWxuHV55/Wkf3pD+ns1pH9 sV8dTcasY+rzbh1DNe/WMWSrrGMo5rKOKcy7dUxd3q1jZD6tY2DKOiZmt46hmss6hviUdQxjL+uY mE/rGKq5rGNqZreOaei7dWRl9GXzbh3T0HfryP70L9F368gw24P7Q5/LOoZqLusY+lzWMZRzWccQ 5zqGOfV5P4Y5MfsxzIn5PIY5Ip/HMIdqrmOYU3j2Y5jT0PdjmANTxzCHcq5jmFM7+zHMYex1DHMo 5zqGOYy9jmEO5fx5DDOj3F98vh/DnHq8H8McirmOYU493o9hDsVcxzCnKO/HMIc+1zHMkfk8hjkx +zHMidmPYQ7VXMcwh/jUMcxh7HUMc2L2Y5hDNe/HMI/NzGK8fvVGqwFBjB0pMVYEMVYEMVYkxNgZ xHhgSoyLObz1CjH2dhBjZ0qMB6TE2BnE2BnE2JkSY0cQY6Jz+F6qxNgjiBjDdDlEjLU/IcYDU2Ls DGLsDGLsDGLsDGI8MLsYO4IYO4MYa7pCjL0dxHhgSoyVCTF2BjF2BjEemBJjZxBjZ0qMHUGMCXP/ Cgwx9nYQY2cQY2cQY/pjv/eTjIsxjH3nlIy90WoowxBjvWuEGGucQ4w1PiHGziDGzpQYD0iJsYY5 xNjDjBh7mBFjDXOIsYY5xFjDHGKsYw8xdgYx5rO6HiLGOvYSY28FMdYohxh7BBFjnbpCjDXrIcY+ KsTYPQ0xHpgSY2cQY2cQY41hiLHGMMRY4xNi7O0gxlrNiHEv5lmMb3XER061BXI4aVViXEhfySHG Nzs6gBjTl7bphxiD+I6x9jfE+GZf+IcY3+zgQIjxzQ5NhBg7U2I8ICXGziDGBNl3jD08JcY+csR4 YEqMb3Y8I8RY2wkx1qGHGGshhxjr0EOMtZRDjPmsvkWLGHufEWOtZsTYu4wYazGHGHuYEWPvMmI8 MCXGyoQYO4MYazWHGGt8Qox17CHGzpQYazWHGHsziLEPHTHWCgsx9qEjxvTnIL0lxnzW4cBWibH2 OcRYqznEWPscYqzlHGKscQ4x9j4jxs4gxs6UGA9IibFWc4ixhwcx9qEjxsqEGGs5hxh7O4ixjj3E mNLomokY0x8XY2d2MfYeI8beY8RYiznEWLMVYqzFHGLsfUaMtc8hxgNTYuwMYuwMYqzVHGKs8Qkx 1rGHGDuDGGs1I8a9mVmMH18d4BwQxNiREmNFEGNFEGNFQoydQYwHpsT4YYexQoy9HcTYmRLjASkx dgYxdgYxdqbE2BHEeGBKjJ1BjDXKIcbaTojxwJQYO4MYO4MYO4MYO4MYD8wuxo4gxs4gxh5mxNjb QYwHpsRYmRBjZxBjZxDjgSkxdgYxdqbE2BHEWMMcYuztIMbOIMbOIMbOIMYDU2KsTIixM4ixM4ix M4ixxjnE2NtBjJ1BjJ0pMR6QEmNnEGNnEGNnEGNlQowHpsRYwxxi7O0gxs4gxgNTYuzMLsZOIMbO IMbOIMYDU2LsDGJMlO3HwSZPQ4wHpsTYGcTYGcR48MYSY2cQ44EpMXYGMXamxJgw1xutPn7W+v8A DfmDNgplbmRzdHJlYW0KZW5kb2JqCjE1IDAgb2JqCjw8L0ZpbHRlciAvRmxhdGVEZWNvZGUKL0xl bmd0aCAxMDI0OQo+PiBzdHJlYW0KeJytnd2qLMtxbu/XU8wXUKu7uvoPDgesY1nXNgI/gLEFBhks vz+cmluzYnwdtUbMdWE2yJvpsbMzI6IqR2ZnVV8+zts/v7ts//N4LR//9tcf//3j8y/3y/LbH/72 7z/+9eO/Pv94etx+Y7/+5Xfr4/Ybf/n4/Odf/vTx93/5219+/P5P54+//M/WzP16vX88L/ePy/l2 /2zqP35r+/Mvy+WynPc//ZS7X5/r1x/PT/7j8+dH/f1fto9anx/353Xr+/LbZ39C258el/Ol/ek3 6vKxPJa//+lxP/7tE1svt5/9l+1Pn9TlY73co7H2t0/scwQ/+S/bnz6py8c1O/b+p3/e/vlpVq73 z1z8/X9v19dpuX0sy+n2C0n7StEf/vzj9/+0xeD68ef/+HHZS2HLwX093z7+/Ncf/+d8vv3D//34 83/y/71fT8/z9l8kcntHHpfT/bx1JpD1/zXkdVrP1+dbK4935Pk4Lefb5a2V5zvyup3O5/tbX87X d+RyXrb+Pt6GdFkbczlvHX6+3phbZ56ndWvsjflDY5b7aSvs9a3Lf/yN+eOfv0vj/bzn8Lp1538n idtl9V0SQTSJhXgSaUWTSCuaRJBFkwjz0iRqhyOJMP+kSdQuX67r6XxZHslc/rEx6+X0fG/mUJvr 63S/3N66c+41dXtsXb6/56Flc/t/b11+PKd0Xh7Xrctv0bn1KD/Pp+dyfq+KPvLn83RfLu/F1Yf1 up/WZRkzsZzX07Ks74XRame5LKetAt9CuJw789r6fH/PROvPsjy2Pj/e+nNuWV+ut63Pr2VK17Je T8v1/J6KlvXldjmdr5fHFOfl9jw9r+8F37K13O+n+3V9T8W9MY/1tF5vtzE8z2Xr8v2tMvrdb3md ty4/31Jx7ul6PbYuv9ap4LfInLYb330K4fVyPa3rexn2i+u6XE5bqJepmq/L63Re13Vktvvqc709 pvhc13Xr82Ps8m3Zuvxcpgq73s9bl1+3MTz35+l8Oz/Hz3rcTs/bcpmuruvzerrfWvm0dF1fl9N6 
W+/T1XV9fU42t9cUnvX82Pr8uEzls17Wrc/PdYrPdjfY+vwar9L1ej6t9/Nbfy6vzjxPy315z0Ur n3W9n7bp5D0X7aa63rb5r+W952u9bxPgfazm9b5NgPfHWyquLczrY5sB78/3G2bv8nObAu+v53RV rK9tCnxcLlN/budtCnwsb6m4njuzTYGP63tptJTeLtsU+Fhf09V1W7Yp8HEfb5i36zYFPh7vpdHb 2Zzn/ng+ppTe1m0OHC+c222bAZ/vQtML9bZJz/n5bjSH3jy2KfD5rjS9MG6PbQp83saL4vbcpsDn u9T0Yr69tinw+S41/bM2A9v6/C41PTrbNXN6vs6X6bPul20KfM02cl+2KfDVpKbFZ1upnZbX9TVd yNsa53R+vVtNj8993abA17vVXB67KQ9GuzUrvS+jVQSjLaR7MUYL0kKA0RbSb78YLa2020IYLc10 18JoizlYJkar4w6j1S6H0dLn7odltIX0CSOMli731RZGC9OHjtHC9C6X0TqC0cJ0wcZoi+k3zTBa ht6VDaMlE90yMVrNVhgt2eqfhdFqKYfRarrCaIlPu3WE0Q7MbrQg3VYxWs16GK1WTxitMxgtQ/+j Gq2mNIyWz2oXRRgtqWhjD6PV0gij1dIIo9ULGaPV6ziMFsaN1oeF0WqFhdFq2sNoNaVhtHolh9HC dB3DaDUVYbSaijBanSnCaGG61mG0MBc1Wp1Pwmg1FxitDwujJYR9WBithxmj1ZtGGK3ewMNoYQ7W W0ZLeLr1YrRazmG0OiWH0cLcf8FbthvWd96iCN7iSHmLInhLIWuLAN4C0nfH8BYY304t5uA/eIv2 OLwFps/eeAuf1WWivAXk3pHyFobVm8FbNILhLXRZvaWQg9XhLR4dvIVhubfQZd+Jo512BwpvKWbY iWNcfUcPb4Hpkzfewti7R+EtGp/wFo0P3qLFHN5CM60Iw1s8PHgLQ++OhLd4ePAWmKZ+4S2aivAW bwdvgTns1pW3aLrCWzRdeIumIryFVLRJJbyFdvqw8BZNV3iLpiK8xcODt3gq8BaY7gB4i35WeIuW WHiLpiu8RdMV3qI31fAWnQfCW8hXZ8pb9N4c3qKlEd6iaQ9v0bSHt8C4t8D0/R+8Baa7Dd6ipRHe AtPuUOEtMLdf8Jatmr7zFkXwFkfKWxTBWxTBWwoZvAXGvaWYwVu0O+EtMO4tfJZ6C4h7C8Nyb4Fx b6HL6i2FDN7i0cFbGJZ7C112b6Ed95ZiBm9hXO4tMH1PBm9h7O4tGp/wFo0P3qLFHN5CM+4tHh68 haG7t8C4t8C4t2gqwlu8HbwFposC3qLpCm/RdOEtmorwFlLh3kI77i2arvAWTUV4C4x7y8CUt8C4 t8B0l8BbtMTCWzRd4S2arvAWvamGt+g8EN5CvtRb9N4c3qKlEd6iaQ9v0bSHt8C4t+iVE96iKQ1v 0dIIb4Fxb+k33tFbtnF85y2OlLcMyO4tjpS3gOh+SyDqLcGot8C4t3iP8ZZopzPlLdEf85Zopp9G Km/x6OAtw0eVt8RntTorbxmiU94SH3X4Lmn3liFb5S3eZbwl+tMn1PIWzxbeEkyf4Mtb4rN6f8pb hv6UtwzjKm/xlJa3RJe71pW3wHxp5nzFX/R77briFeGKd6SueEW44hXhii/k5le8NhNX/MDUFX+x r+Hjiofplw9X/MW+8ueKH5C64hl6vwy54otZ+8KJK76Yw7fQdcUXcv2jXvEeQa54Z7jilYkr/tIO O/zsiofp3+hyxROerutc8fRHzzoOfeaKd4Yr/mKHU7jitcDiitfqYaUyMLVS8QpjpeIVxkrFK4yV ipcYKxUPISuVidlXKgNTK5WIT19i7CuVAamVilcYKxWvMFYqQ5drpeL3J1YqnnZWKl5irFS8HVYq A1MrFS8xVipeYqxUvMRYqXiJsVLxOLNSGZh9pTIh+0rFy4eVit/EWKl4ibFS8RJjpeJ9ZqUS/ekn /mql4mlnpeIlxkplaKfOOg7M11lHLzDOOnqBcdbRC4yzjl5gnHUcolxnHQemzjoOnlZnHb3COOsY TDNIzjp6hXHW8VBhs2Vevz1/6AiW6UhZpiJYpiJYZiGDZWozYZnF9H2ZsMyrHdAIy4Rxy/R2yjIH pCzzakdlwjKLGSzzasdpsMxCBsv0KGOZzmCZyoRlEp6DQZZlagjDMgmPWyb9ccv0PmOZVztKFJbp fS7L1AILy/RmsExnsEytsLBMrbCwTK2wsEwtsbBMDXNY5sCUZTqDZWqJYZkguh/uFRaWqRUWluld xjLpj54/9LSHZWqJhWVqO2GZzmCZWmJhmVpiYZlaYmGZWmJhmRrnsExnyjIHpCxTKywsU6elsEwt sbBMLbGwTO1zWCb96XvmWKamPSxTSyws09vBMp3ZLVMLLCxTCywsUwssLFMLLCzTo4xlOoNluqdh mVphYZma0bBMrbCwzF5hs2Wu354WdQTLdKQsUxEsUxEss5DBMrWZsMxiekLCMlc7KhOWudpRmbBM b6csc0DKMlc72BSWWcxgmaudSMIyCxks06OMZTqDZSoTlrnaYaOwzLUdNvqZZRIet0z645bpfcYy vc9YpqYdy9QCC8v0ZrBMZ7BMrbCwTK2wsEytsLBMLbGwTA1zWObAlGU6g2VqurBMkP4gDJapFRaW qRUWluldxjK9P1impj0sU0ssLFPbCct0BsvUEgvL1BILy9QSC8vUEgvL1DiHZTpTljkgZZmarrBM GH3KxUssLFNLLCxT+xyWqX0Oy4TpT7BgmVpiYZlaPmGZzuyWqQUWlqkFFpapBRaWqQUWlulRxjKd wTLd07BMzVZYpjNYplZYWGavsNky79+e7XUEy3SkLFMRLFMRLLOQwTK1mbDMux1+Cst0Bsu828Gm sMy7HZDCMgekLPNux9DCMosZLPNu58ewzEIGy/QoY5l3O4YWlgnTz4lgmRqesEyYw7fqZZmExy2T cbll6tjDMrV6wjIHZrdMLbCwTBh9lnoIIZapFRaWqRUWlqkVFpapJRaWqWEOy9QSC8uE6V91Y5ka QyzTm8EytcLCMrXCwjJ96FimVk9YpjNYppZYWKa2E5apJRaWqSUWlqklFpapJRaWqSUWlql3n7BM Z8oytXrCMr0ZLFNLLCxTSywsU0ssLFNLLCxT0x6W6QyWqSUWluntYJka57JMLbCwTC2wsEwtsLBM LbCwTM16WKaPCstUJiwT5g+dKcvUaTIsUyssLLNX2GyZz+9PYiuCZTpSlqkIlqkIllnIYJnaTFhm MYNlOoNlwrhlejtlmQNSlvnU49FYZjGDZRbjllnIYJl0Wd9BOTBYJozvZcL4uUxvB8skPOdfuDRe 3x9ZVoRLw5G6NBTh0iikv2uIS6OQ/qqhuDT0k+LSgOk1xKVRTH9vT1waMP3pAi4N709dGgNSl0Yx /fU/cWkQHn24Mj7LHq6Mj9KHK4cuc2nQHX24MpKuD1dGO/pwZWRCH66McfXFHguwl53pigUYTF/x 
sACD6V87sAB72RkzFmAg/a7AAuzVjo/9bAGm6YoFmKYrFmCkSx+ujHb6ljkLME1XLMAY+2FxVQuw gakFmKY9FmAwfcVTCzCQft6EBZhWRizAtDJiAaYpjQWYpjQWYKTCt/lJqT5cGe30NzGxANOUxgJM 0xULME1FLMAGphZgMPoyq6EdFmDO1AKMVPgCTFMaCzBNaSzA9AqMBRjpOjw4WQsw2vmHztQCTFMa CzC9icUCzBkWYHp5xQJML69agDnBAkwvwFiAafHEAsw/iwWYj4oFmFZGLMDc01iA6ewfCzAYX4BR GZ1hAaY3DV7P6lXI61m9wng9q980tnXg1ud3qzm8Lva+zYGv90Ltw3osp+v5PE639818Lud3q+nT yX1T2dd5eY5d3lz2cX63ml5ij/P1dDvfxotiu863Pt9/1p9R+D9/9+Eb4XekhH9AduF3pIQfpD/E WsIfSN/DL+GHOT5xvAu/9wbhD0Z/VCM+S3+PIZjDq1K+hD+GdXhweRf+YPSp5Ohyl/kS/mHou/CD uPAPzZTwRzu6Fg5G18IwR5nfhT/a6SZawh9MXziU8AfTDbuEP5i+KCjhj/joU8nBdFPfhd/DjPAP Qy/hH8Jcwj8wJfxDKkr4PTwIfzBdwkv4PV0Iv5cYwh+MfuPi6Srh92wh/J4uhD+6o8I/DKuE39OF 8Hu6EP4hFSX8ngqE38eF8Pu4EH5PBcLvuUD4PRcI/9CfEn6Pcwn/hOzC76lA+Id2SviHVJTw+x0B 4Z+YXfg9FQi/pwLh91Qg/NEfFX5P1y78HkGE3zOB8A/tlPB7JhD+oZ0S/iETJfyHTMxWt3x7WtsR rM6RsjpFsLpC3OpA3OoWO7AVVqe9CauDcavjs9zqYNTqGJZbHYxbHV12q/Ohl9UVMlidN4PV0Y6e ownGrc4ZrG5gyuoWO4IXVuftYHUw3Q6xOh17WJ1/VlkdYf6Vq3n99lScI1zNjtTVrAhXcyF+NYP4 1bzaF+NxNWtv4mqG8auZz+pfO3A1w+jVzLD8aobxq5ku+9XsQ6+ruZDhavZmuJppp39lxdW82vGD uJpp57D+qqsZ5vDFTV3N2p+4mmH8ambs+qVMMIf1V13NGkOu5tVOMcQazUPIGs2HxRrNQ8gazVPB Gk0/K9ZoGsJYo2kIY42mIYw1mvY51mje51qjaZhjjebNsEbTMMcazdthjQbT11as0TzMrNE8zKzR NMyxRtOrK9ZoOq5Yo2mcY43m7bBG0zjHGs3bYY2m5cMaTcMcazQNc6zRPMys0eiO/maeDyvWaBrm WKN5O6zRNMyxRtMQxhpNyyfWaD3Os9vcvz2L5Qhu40i5jSK4TSHuNiDuNnc9tITbaG/CbWDcbfgs dxsYdRuG5W4D425Dl91tfOjlNoUMbuPN4Da04yuV+/dnsQYGt7nbEbNwG28Ht4HpEzxu4+3gNjr2 cBtiaO/IC6QrEm6jqQi3oTtd63AbmN4d3MaHjttoKsJttJ1wG01FuI23g9toKsJtNBW4jaYi3EZT EW6jYQ638WHhNs7gNp4K3MbbwW00FeE22k64jaYi3KanYp7AHt+emHSECayQfnNlAiukn91hAgOx H7T3vsQE5gwT2MDUBOYME5gzTGDO1AQ2IDWBOcME5gwTmDM1gRUyTGDeDBOYJjQmMBh9ZC2Yw0sP agJ72NGLmMAedqwiJjAYX5w/7KxDTGAw+vOfwfTXitUERpj7pMIEBqMnJj1dMYF5KpjAYPQH7b2d mMA0XTGBER79+c9g+pe1TGCarpjAtDSYwPyjmMA0XTGBabpiAtN0xQSmN92YwEjF4SeyagKD0Z// 9HTFBKbpiglMYxgTGEyf5Ficw/RH31ica5xjca5xjsU58bGf/4xUHNbvtTiHaTfVWJzD6CNrQypY nOtNLBbnGp5YnGt4YnGu4YnFOeM6PI5Wi3PaOTzWVotzjU99garRiS9Qe3RmQ3p9f8RMEQypkKVF AEMq5NoeicCQ9IMwpEL6cccwJBj9gXSYww9WYEgw/bMwJD5Lf7Ar2tElvo48DMkZDMkZDMmZMqRC BkPyZjCkl313Hob0su/7w5Be9j19GNLLvu8PQ9L+hCG92nf5PzMkGP3BLo9PGBL9OTx3shuShwdD 8vBgSB4eDMnDgyF5eDAkDU8YkoYnDIn+9OkdQ9L4hCFpfMKQND4YkoYnDEnDE4bk4cGQPDwYEv3p ZoMheXgwJA8PhqThCUPS+IQhaXzCkDQ+YUganzAk7w+GBNOVBEPSGGJI/lEYkoY5DMkZDMlTgSFp O2FImoowJE1FGJKmIgyJ/vRH2zEkvRmGIWkqypB85BiSRjCOmHk7HDHzCHLEzCPIETOPIEfMGHnr M8+UeHR4psQLnmdK/L7LMyXDZ9UzJR5DninxXPBMydBOPVNyyMVovY/zt0fwHCnrHZDdeh0p63Wk rBfkYJBlvcGo9cL0h7ax3minf2lV1hvt9G+SynqjnW7Pu/X6yLHegSnrHZiy3oHZrRfErXdopqw3 2ulMWS+MW6+3g/UGo9Y7tFPWG+NS6w1Grdfjg/VGfw6v7v+y3iE8Zb3RjD5YMQy9rHdgynqHMJf1 ejtYr4cZ6/UwY70eZqw3+tP1sKzX41zWOwyrrNfDg/UO7ZT1DuEp6x3CU9Y7hKes18sH6w2m22FZ r4cQ6/WxY70eQ6x3aKes12OI9XoMsV6PYVlvdOfwtqvdeoPpZ5rKeocQlvUOQy/r9RBivd4O1ush xHo9hFivhxDr9RLDev2GuVuvRxDrHUZe1jtEsKx3aKesd4hgWe8QwbLewdPKemPsBzPerdfjg/V6 JrBezwTWO3xWWa/HEOv1XGC9Qzv7k9SHVMxivHx7mt0RxNiREmNFEGNFEONCDseeEONibv1dPIgx TJdexBhGvzCP/vS3gSHGMPY7yPFRB3cuMfZmEGONYIixMyXGhQxi7M0gxosdhgwxXuxQZYixthNi vNihyhDjxQ5Vhhgv7cDkz8QYxsVY4xNiTH9UjD08iLEPCzH2ECLGziDGHmbEWNsJMdYwhxhrmEOM NcwhxvTHxVjjjBj7sBBjDU+IsbeDGHt4EGMPD2Ls4UGMtXxCjGFcjDWEIcY69hBjjWGIsbeDGGsM Q4w1hiHGGkPEmO64GMO4GHsIEWMfOmKsIQwx1nZCjDWEIcYawhBjDWGIsZZYiLHeMEuMNYIhxj5y xNgjiBh7O4ixRxAx9ggixu5piDFjdzHW+IQYayZCjDUTIcb+WYixxjDEWHMRYuztlBj3VMxivH77 KIQjiLEjJcaKIMaKIMaFHHZoEeNiDrvKiPHAlBjDdBNFjOmPPrQdjO4Y68hDjJ1BjJ1BjJ0pMS5k EGNvBjEmgn3kiDFMW8aEGGs7IcZEWV/FE0wXWsR4tcPjIcb0p59vQIy9HcR4tcPjiDFIDw9iTHf6 gVTE2LuDGK92Tj3EGKYPHTGG0Xdv+tBDjJ1BjAemxFhjGGKsFylirCEMMfbuIMZcOXpOwq+uEGO9 
h4UYM6zDKdESY70CQ4y14kOM9SoNMdYYhhj7ZyHG3g5irLkIMdbSCDHWywsx9u4gxt4dxNi7gxhr pYYYa39CjHt/Zk941DHH/m7D8oRC+nyJJ4D01w2WJzzs0ACeQCut8PEEED1PCXN4rBJP8HbwBB13 eIJ2OTxBR44n0OX+6nE8gY/qX4bjCTB97xBP8GGVJ4AcHKA8wUeFJzzsIER4grYTngCjP8XpWQ9P gHFPcAZP8D7jCbTTXCs8QdOFJ9BMn5fxBC3C8AS63J8UwRNI1z/+wp3j+f2ZFEW4cxRy7S8hrTsH yM3uHIX089zcOWjF7xw042dSijk4NHcOHXfcOYoZzqQUc7ic687xtG9v4s5Bl/sNkTsHTB86dw4Y e7v/gHDngOln0LlzkC19u38MvV8a3Dme9qVd3Dk0W3HnIFu+9Q7TdZ07hzPcOci6vt0/2rGT2F4Z cecgFf2xOO4cznDngNFn1bzCYoWhlRorDBhfYTD2/hNsrDC0fGKFwWf5CoN0dV2vFYZWYawwtApj hcFHHX6CrVYYel+JFQbt9G1+VhhaYbHC0BKLFYaWT6wwnGGFoSUWKwwtsVhhaInFCkNLLFYYWmKx wtASY4WhFRYrDK2wWGFoicUKQ0ssVhhaYrHC0JthbL3TTv8hMrbetcRi611LLLbetXxi692Zfetd Cyy23rXAYutdCyy23rXAYutdCyy23rXAYutdKyy23rXCYutdKyy23t0J2XrXCoutd72Jxda73sRi 6x2mb/Oz9a4VxtY7IeyL5Hq7fzD6dn8vQt7u7wbF2/29fHi7v4eZt/t72h+b+VzOzWr66n9z2de5 WU1vZ3PZx+W8/qR8dHHxWH+2dBjXIs+znYeptQhIv2nVWiRaafesWosE0rJcaxGQXk+1FgE5vCax 1iLB6Iufgnl1Zl+LBKM/whdMX6/UWsRDXGuRQA4Pju5rkUiDHgMKpn504p+3f/77xyf6+fLix1Z1 f/v3H//68V8/Ng973Da32bTu9tv9fvs/yxbF2w5s/9Un8fmffv3L79bH1pV/++uP3//p/PGX/5nq akvoz5ee1FUhfa+EuipE38cCortjIIvtjkVf+nqRuoLRNW4w/UcZqSsYfdo4GN0dg/mVTYfnthj7 ZtPBERJSyGKbDiC9TklIIT0CJIRW9A1v3kwkZGAqITD9pkJCNDaREGfqQh+QutCLueob3jzIbDoM Q983HYYo16bD0ExtOgxMbTp4lNl08PCw6TAx+6bDwNSmw8DUpkOUu573i3EdtiL3TYeJ+dp0iFQ0 aWXTIZrR33QPRs/7DUOvTYeJ2TcdolL7Src2HbxS2XTwMmTTIZjDI+L7pkMwfbegNh2C6av8fdMh QtgX+bXp4Oli08FDyKbDwNSmw8DUpsPA1KbDxOybDs6w6eBpZ9PB086mg6edTQdPO5sOQ59r02Fg atNhYPZNhwnZNx0GpjYdIoT9G9TadPAbJpsOXs1sOvjNmU0HTxebDj4uNh0GpjYdJmbfdBiYr02H uDXrpoNHkE2HYeS16eCFyqbDEOXadBg+qzYdBk+rTYeJ2TcdBqY2HQamNh38YmfTwcfFpkPER39S 0KdJNh2C6TsB+6bDoTuzGD+v34qxIoixIyXGiiDGiiDGioQYF3P42goxhtHf2g5Gv40b+oMYO1Ni PCAlxs4gxs4gxs6UGDuCGBOd7umIMUzfZ0CMNcohxtqfEOOBKTF2BjF2BjF2BjF2BjEemF2MHUGM nUGMCXM/MokYezuI8cCUGCsTYuwMYuwMYjwwJcbOIMbOlBg7ghhrNYcYezuIsTOIsTOIsTOI8cCU GCsTYuwMYqz35hBjmG6HiPHAlBh7fxBjZxBjZ0qMB6TEmC73NQFirOUTYuwMYqyfFWKsN/AQY/2s EGMde4ixM4jxwJQYO7OLsROIsTOIsTOI8cCUGDuDGOtFEWLsnoYYD0yJsTOIsTOI8eCNJcbOIMYD U2LsDGLsTIlxL+ZZjLf/5DsxVgQxdqTEuBA9pgbSVxiIcSGHrXfEmGZcjGlHnxD37oQY68BDjJ0p MR6QEmNnEGNnEGNnSowL6TuiIcZE53CUrcTYs4UY007fEkWMtcshxgNTYuwMYkyd9rNaiDHj0t+k 9goLMWbs9pvUHuYQYw1ziLGHGTH28CDGA1NirEyIsTOIsTOIMenSF4b62EOMYexBGE9FiDGpOPze dImxDwsxdgYxdgYxdgYxHpgSY717hxhrxYcYa5hDjDXOIcYa5xBjHVeIsTOIsTMlxh4exFinkxBj nbpCjD2EiLEyIcYa5hBjDXOIsYYnxNgZxFhjGGLs49rF2EeFGPuoEGN642Kst+8QYz5LnxAPpi9R EGP3NMR4YEqMnUGMnUGMUYRWqSHGxKcfm0OMB6bEWK+cEGONIWLckVGMX5fbd2LsSInxgOxi7EiJ sSMlxo4gxgNTYjwxuxjD+I7x0E6J8cDsYjwhuxgPTInxwJQYD8wuxgNSYjwxuxgPTImxRxkx9nYQ 44nZxXhgSowHpsR4YEqMB6bEeGK+xHhASowHpsR4CHOJ8dBOifHE7GLsDGI8MCXGA1NiPDG7GA9M ifHA7GI8ICXGHmbEeGinxHhgSowHpsR4YEqMJ2YXY2cQ44EpMR6YEuOBKTH2OCPGQzslxgNTYjww uxhPyC7GA1NiPDAlxgNTYuwMYjwxuxh7mBHjoZ0S44EpMZ6YXYwH5kuMB6LEeGBKjAemxHhidjEe mBLjIcolxoOnlRhPzC7GA1NiPDAlxpM37mI8MCXGE7OL8cCUGA/MLsYR5tsviPHVzmIjxoogxo6U GCuCGBfSd38Q40L65k+I8dUPdJcYX/XwOGLMZ+mhbx9ViLEzJcYDUmLsDGJczNLPLiDGVz2nXmLs CGJ81SPxiDER1J+Y8nZCjHXoIcYDU2LsDGLsDGKsYQ4x1hiGGBOfvh1cYqzFHGJMM/pgsxd8iLEP HTEemBJjrmMXYy2NEGMtjRBj2unvBkKMSVd/aBkx1vggxpqKEGNNRYixhjDE2BnE2BnE2BnE2MOD GGt4QoxhdMd4YBBjvSpCjHtpzJPc+u2DNI4wyTlSk5wiTHKKMMkpEpOcM0xyA1OTXDHD7o+3wyTn TE1yA1KTnDNMcs4wyTlTk5wjTHIDU5OcM0xyGuWY5LSdmOQGpiY5Z5jknGGSc4ZJzhkmuYHZJzlH mOScYZLzMDPJeTtMcgNTk5wyMck5wyTnDJPcwNQk5wyTnDM1yTnCJKdhjknO22GSc4ZJzhkmOWeY 5AamJjllYpJzhknOGSY5Z5jkepznSe6+nxTSx3dB9PFdEH18F8QnuUL0FVXRFz37E505PPJdkxzt +EqumF957Pb1+PYQFUh/vxaBfNhXlwSykC5DBPJhX5YRyId9ixqBpL/6MxvRm8Oz0hVIGF8Sa/jC FpwpWxiQsoViepmFLdDlXmfYAkyvs7IFoqyP3QZzOGhVtqBlEbagGQ1b0PCELZD1w5sEyxa8HWyB 
dvpjwNiCFmrYAmF2W9AYYgsawrAF7w628LAvmcMWPDzYwsCULSgTtqBhDlvQcYUt6EUatgDTPwtb 0DhjC94dbEHDHLbg4cEWnMEWPITYgvcZW6DP+jMbfr8MW9DPClvQ+IQtaErDFno78yT3+v5AhCJM co7UJFdIP9XPJAfSos0k97LvDmKSc4ZJ7tU2x382yXk7THI68JjknKlJbkBqknOGSc4ZJjlnapJz hEluYGqSc4ZJTqMck5y2E5PcwNQk5wyTnDNMcs4wyTnDJDcw+yTnCJOcM0xyHmYmOW+HSW5gapJT JiY5Z5jknGGSG5ia5JxhknOmJjlHmOQ0zDHJeTtMcs4wyTnDJOcMk9zA1CSnTExyzjDJOcMk5wyT XI/zNMk9zhfb2d4nuUDsTWmJyJI4kWba+ySXiKzkAtF93xxSXz/sk1ww+kar7E5f8uyTXLbTV437 JBfMLyytH+el3gBxOGe6J6SQrjokxJFKCIg8nxSIbcRnd/sD9ySkmMOCjoRohyMhMP07VxKiXY6E OPNlHSPyZR0Ts1tHMIf3Vu/WMYXnyzqmke/WMTJf1pEf1TcwdusY2inrGIZe1pGVcXjV9pd1ZH/6 Qzq7dWR/7BcskzHrmPq8W8dQzbt1DNkq6xiKuaxjCvNuHVOXd+sYmS/rGJiyjonZrWOo5rKOIT5l HcPYyzom5ss6hmou65ia2a1jGvpuHVkZfdm8W8c09N06sj+HH+H5so4M8+FV21/WMfS5rGOo5rKO oc9lHUM5l3UMca5jmFOf92OYE7Mfw5yYr2OYI/J1DHOo5jqGOYVnP4Y5DX0/hjkwdQxzKOc6hjm1 sx/DHMZexzCHcq5jmMPY6xjmUM5fxzAzyu2NDnUMc+rxfgxzKOY6hjn1eD+GORRzHcOcorwfwxz6 XMcwR+brGObE7McwJ2Y/hjlUcx3DHOJTxzCHsdcxzInZj2EO1bwfwzw2M4vx+t0brQYEMXakxFgR xFgRxFiREGNnEOOBKTEupr81JcTY20GMnSkxHpASY2cQY2cQY2dKjB1BjInO4XupEmOYvoxBjGFs O27oT4jxwJQYO4MYO4MYO4MYO4MYD8wuxo4gxs4gxpquEGNvBzEemBJjZUKMnUGMnUGMB6bE2BnE 2JkSY0cQY8LcvwJDjL0dxNgZxNgZxNgZxHhgSoyVCTFm7HYMcyjDEGO974YYD0yJsfcZMXYGMXam xHhASoydQYydQYydQYw1zCHGGsIQY2cQY+1PiLEziDGf1fUQMYbpgr2LMUTXfcTYo4MY+ychxv5Z iLEziLFOgSHG7mmI8cCUGDuDGDuDGOu4Qow1ziHGA1NirDfVEGPvT4lxD/Msxrc64iOn2gI5nLQq MS6kr+QQ45sdHUCM6Uvb9EOMQXzHWPsbYnyzL/xDjG92cCDE+GaHJkKMnSkxHpASY2cQY4LsO8Ye nhJjHzliPDAlxjc7nhFirO2EGOvQQ4y1kEOMdeghxlrKIcZ8Vt+iRYy9z4ixVjNi7F1GjLWYQ4w9 zIixdxkxHpgSY2VCjJ1BjLWaQ4w1PiHGOvYQY2dKjLWaQ4y9GcTYh44Ya4WFGPvQEWP6Yz+8mJ91 OLBVYqx9DjHWag4x1j6HGGs5hxhrnEOMvc+IsTOIsTMlxgNSYqzVHGLs4UGMfeiIsTIhxlrOIcbe DmKsYw8xpjS6iiLG9MfF2JldjL3HiLH3GDHWYg4x1myFGGsxhxh7nxFj7XOI8cCUGDuDGDuDGGs1 hxhrfEKMdewhxs4gxlrNiHFvZhbjx3cHOAcEMXakxFgRxFgRxFiREGNnEOOBKTF+2GGsEGNvBzF2 psR4QEqMnUGMnUGMnSkxdgQxHpgSY2cQY41yiLG2E2I8MCXGziDGziDGziDGziDGA7OLsSOIsTOI sYcZMfZ2EOOBKTFWJsTYGcTYGcR4YEqMnUGMnSkxdgQx1jCHGHs7iLEziLEziLEziPHAlBgrE2Ls DGLsDGLsDGKscQ4x9nYQY2cQY2dKjAekxNgZxNgZxNgZxFiZEOOBKTHWMIcYezuIsTOI8cCUGDuz i7ETiLEziLEziPHAlBg7gxgT5S7PiLF7GmI8MCXGziDGziDGgzeWGDuDGA9MibEziLEzJcaEud5o 9fkTyf8fg3MstgplbmRzdHJlYW0KZW5kb2JqCjE3IDAgb2JqCjw8L0ZpbHRlciAvRmxhdGVEZWNv ZGUKL0xlbmd0aCAxMDMxMgo+PiBzdHJlYW0KeJytne+qJMlxR7/PU9wXUKv/VleDMXhtWZ9tFvwA QhYYZLD8/uCa1a04v46aE3c+mIXVMjqTlRkRVXkyO6v78nHe/vndZfvX83X9+NNfv/3Pt+9/slyu v/3B3/787T8+/vv7H56ej9/Yz//43eNxv3/nLx/f//n3P378/T/+9pdvv//j+eMv/7s187xtLayX 5eNyfizfm/rP39r+/ifXy+V63v/oh9zzfj9//uF55S+fv1/q7/+xXeq+fjzXdfl4Xn+79ndo+6P1 8ry1P/qNunxc18+mnsvxz75j9+flR3+z/dF36vJxvyzRWPuz37BtBD/4m/2PNurycbtmW29/9G/b Pz/Mym35nou///txe52uj4/r9fT4iaR9puiXX7/9/l+3S90+fv3Pb5e9FLYcPO/L5ePXv377h/P5 8fjHj1//i/93uZ3W8/Y3Arnf35Hn5bSct85kK5eGvE738219a+X5jqzP03X7e2+tXN+R1+N0Pi9v fXmc35HL+br19/k2pPOtMZfz1uH19dbOP3VmPd23xqbQXK7LaSvs+9uo/rkxt/vpfLk+39ppI7/c L6e1NbN25HVaLo/rOKzHc+vy8paHS7/U8ti6/HxPxB9+Y/7w61eVt5z3srttEfz/qbvtGj9OAHUH 8rC6K6SHn7qjFa07Wlmt7kCuWncwL6077XDUHcy/at1pl6Puirn8i9VdIYeaou5gftG6ozv9dqLu NJ3bU3zr8lt0+gPgsp5P6/X8XhV95Ot6Wq6X9+Lqw3otp/v1Ombier6frtf7e2G02rlerqetAt9C eD135rX1eXnPROvP9frc+vx868+5Zf16e2x9fl2ndF3vt9P1dn5PRcv69XE5nW+X5xTn62M9rbf3 gm/Zui7Labnd31OxNOZ5P91vj8cYnvW6dXl5q4xLm2Cur/PW5fUtFeeertdz6/LrPhX8FpnTcj8v Uwhvl9vpfn8vw35z3a6X0xbq61TNt+vrtM2U95HZnqvr/fGc4nO737c+P8cuP65bl9frVGG35bx1 +fUYw7Osp21CXcdrPR+n9XG9THfXbb2dlkcrn5au2+tyuj/uy3R33V7fJ5vHawrP/fzc+vy8TOVz v9y3Pq/3KT7b02Dr82u8S++38+m+nN/6c/mlM+vpulzfc9HK535fTtt08p6LduvcH9v81/Le87VN 4adlGav5vmwT4PJ8S8Wthfn+3GbAZX1/YPYur9sUuLzW6a7Y/p/T+rxcpv48ztsU+Ly+peJ27sw2 
BT5v76XRUvq4bFPg9v9Pd9fjuk2Bz2V8YD5u2xT4fL6XRm9nc57luT6nlD7u2xw43jiPxzYDru9C 0wv1sUnPeX03mm5wj+c2Ba7vStML4/HcpsD13WkOo1q3KXB9l5rDqF7bFLi+S02/1mZgW5/fpaaP a7tnTuvrfJluruWyTYGv2UaW6zYFvprUHNYr2xT4ur3Ga23ic369W82hz/dtCnwtP8rFZsputN+f n9L73WgdKaMF6V5cRhtIC0EZLUh//JbRRit9qVBGG8101yqjhTlYZhmtjxuj9S5jtNHn7oe70YL0 CQOjjS73oZfRBtO7U0Y7MLvRDkgZbTCXzuxGG0wrV4wWpj9YMdoIT9e6MtrIVjfRMlrPKEYbGe3X KqP1csdoPaUYbcTnl858Gq2nAqMdmDLaYLr1ltF6SjHaYF6d2Y02hv4HM1pPKUYb12o3DkYbqVg6 sxutlwZG66VRRuv3Okbr9zpGG8zBenejHYZVRusVhtFGCHuYy2gnZjdaf/hgtF7NGK2nAqP1VGC0 PptgtMGo0QbTSh6j9TmnjNZTgdFGeHp3ymj97sJo/W7HaD2lGK2nC6MdmDLaCE834zJaL2eM1qdt jDaY5SfcZhvmV26jCG7jSLmNIrhNIfcWAdwGpO+g4TYwfUbFbYo5OBJuoz0Ot4HpW2i4Ddc6bAHv bgOydKTchmH1ZnAbjWC4DV1WtynkYH64jUcHt2FYXdlwG7qsu3XRTp90cZtifLcuxtV3/XAbxqW7 dT72cBsde7iNVipuQzNdf3AbHzpuw9C7juE2Hh7cBmbtTLkNTJ+8cRuY9hQPt9E+h9tousJtNF3h NpoK3IZU9C7jNjTTu4zbaLrCbTQV4TYw3X9wGxjdrQvm0plyG017uI2mPdxG0xVuo+kKt9EHZriN PuPDbchXZ3AbffDiNloa4Taa9nAbTWm4jacCt9E7J9xGrxVuo6URbgPT/Qe3gXn8hJPcz186iSI4 iSPlJIrgJIrgJIUMTgLjTlLM4CTanXASGHcSrqVOAuJOwrDcSWDcSeiyOkkhg5N4dHAShuVOQpfd SWjHnaSYwUkYV59QcRLG5U6iYw8n0bGHk2il4iQ0407iQ8dJGLo7CYw7iYcQJ4FxJ4FxJ4FpYw8n 0XSFk2i6wkk0FTgJqXAnoRl3Ek1XOImmIpwExp0Exp0Exp1E2wkn0bSHk2i6wkk0XeEk+sAMJ9Fn fDgJ+XIn0QcvTqKlEU6iaQ8n0ZSGk+gdGE6id3s4iV4rnERLI5xE79JwEpifcZLtX185iSI4iSPl JIrgJIX4PgmIOwmMO0kxg5Noj8NJaKczOAn9USehmX7SCCfR6IST+KVwEq7VP7wpJ/Ho4CRc6vA5 UTmJZwsn0S6Hk9Cf/gEGTqLZCieB6RM8TsK1en9wEu8PTuLjwkk0pTgJXfbPgIr5VMj5jn/qZ9Z1 xyvCHe9I3fGKcMcrwh1fyMPveG0m7viBqTv+aR+xxx0P028f7vinfZzPHT8gdcczdP/Ut5h7XxRx xxdz+IS57vhCbn/QO94jyB3vDHe8MnHHEx6/4zWEcccTnq7r3PH0x1ch3mfueGe44+nz4amw3/Fa YHHH+9BZhTjDKkQrLFYhWmGxCtEKi1WIllisQjSEsQoZmFqF6I0cqxCY/tFnrUJAfBWiFRarEK2w WIX4sFiF+LBYhWjaYxWiJRarEG0nViHOsArREotViJZYrEK0xGIVoiUWqxCNc6xCnKlViFZPrEI0 XbEKcYZViJZYrEK0xGIVosOKVYiOK1YhmvZYhWiJxae+3g6f+jrzeY7RC4xzjF5gnGP0AuMcoxcY 5xiHKNc5xoGpc4yeCc4x+gOKc4xDO3WO0SuMc4yHCpstc/36bKEiWKYjZZmKYJmKYJmFDJapzYRl FtO9JSwTxteVMG6Z3k5Z5oCUZa52xCUss5jBMlc7B4NlFjJYpkcZy3QGy1ztGExYJkzfqMUyVzsG E5ZJeNwy6bNbpo4rLJP+dGXDMjXtWKYWWFimN4NlOoNlaoWFZWqFhWVqhYVlaomFZWqYwzIHpixT SywsU8sHywRxy9QKC8vUCgvL9GFhmfSnv0yEZWrawzK1xMIytZ2wTGewTC2xsEwtsbBMLbGwTC2x sEyNc1imM2WZWmFhmTD94wIs00OIZWqJhWVqiYVl6rDCMrXkwzK1z2GZWmJhmd4OlunMbplaYGGZ WmBhmVpgYZlaYGGZHmUs0xksUyssLFMrLCxTp6WwTK2wsMxeYaNlLucvT3k6UpY5ILtlOlKW6UhZ JohbpjeDZcJ0b8Eyg1HLDEYtc2hnt8wJ2S0zhq6WCeOWCaOWCeKWOUS5LHNgyjJj6H2fsizTw4Nl BnN4y2W3zAiPWmb0WS3Tx4VlRn8Ob6fsljmMa7dMLzAsc2imLHNgyjK9wrBMrzAs0ysMy/QSwzI9 zFjmxOyWGWPvxxPKMgdmt8xA1DK9wrBMrzAscxhWWWb0p5+WKMv0tGOZXmJYpreDZQ5MWaaXGJbp JYZleolhmV5iWKbHGcscmN0yvTSwzIEpy/SHBpbpJYZleolhmT4sLNNLDMsMpr95UpbpJYZlevlg mQPzaZleYFimFxiW6QWGZXqBYZlDlMsyB6Ys04UFy/RnGJbpFYZleoVhmYcKmy3z+uW5XUewTEfK MhXBMhXBMgsZLFObCcu82jm0sExnsMyrHWwKy7zaeTYsc0DKMq92DC0ss5jBMq92fgzLLGSwTI8y lsmw9D1pH3pYJkx/ORfLhDl8O1BZJuFxy2Rcbpk69rBMrZ6wzIHZLVMLLCwTpn/wjmV6mLFMrbCw TK2wsEytsLBMLbGwTA1zWKaWRlgmTD9Mi2VqDLFMkL4limVqhYVlaoWFZfrQsUytnrBMZ7BMLbGw TG0nLFNLLCxTSywsU0ssLFNLLCxTSywsU59QYZkw/TBtWaY3g2Vq+YRlegixTC2xsEwtsbBMLbGw TE17WKYzWKaWWFimt4Nl6l1alqkFFpapBRaWqQUWlqkFFpaplRGWqVNXWKZWT1imPufCMr0dLFMr LCyzV9hsmbcvT2I7gmU6UpapCJapCJZZyGCZ2kxYZjGDZTqDZcK4ZXo7ZZkDUpZ50+PRWGYxg2UW 45ZZyGCZdPlwWrssE6afS8AyYfQT82DcMrU/YZmE5/wTt8bjyyPLjnBrOFK3hiLcGoX07wji1ijk 8FW93Bp6pbg1YHoNcWsU079vJ24NmP52AbeG96dujQGpW6OY/rU9cWsQHn1xMq5lL07GpfTFyaHL 3Bp0R1+cjKTri5PRjr44GZnQFye9z7EAe9iZrliAwfgCDObwAkItwGAOB072BdijHQ370QLMh8UC TFMRCzBSoV+9Gu34AkxTEQswHXoswGD0q1eHdliAwfSDIizANKUswDQVsQDTVMQCjBAemFqAkYo+ dBZgtNMXhCzAPBUswBh6P/rMAkzDEwswZ1iAwfQvhmIB5u2wABuYWoCRC/2iKs9pLMA0pyzA9M6J 
BRjp8m1+2mnTWyzANKWxACM8fTHDAkzTHgswb4cFmLfDAkzTHgswTWkswPROrgWYJjQWYJrQWIDp TBoLMJjD17PWAoyMdoYFmN7ssQDT6okFmFZGLMB0WooFmGY9FmDeTn31qlcGX73qlcFXr/ozfnls c+Dr3Wp+5utZl+XLI9SOYL2OlPUqgvUW0t/kxHpB+kY21rvYKaqwXu1NWC9MN2Osl2vpDw4Ec/gu kN16GZb+0EUw+mpudFm/nnUYellvIYP1ejNYL+34xw6LnYoL613sPFtYr7eD9cJ0e8Z6tc9hvYud QwvrJT76am4wXWnLejXMYb2LHa8L6/XwYL0eZqzX28F6NcxhvdrnsF6Yw1evlvVqmMN6NcxhvRpn rJfu9EthvRqesF4Nc1ivVmFYr4cZ6/UQYr0w/VMZrFfDHNarYQ7r1TCH9erYw3o1zmG9zmC9mouw Xi1VrFdTEdbr3cF6NV1hvZ4KrFdTEdarqQjr1aGH9eptEdarYw/r1VSE9WpplPVqBMN6NVthvTDd 97FevXHCenu2Zhtbvz5qrAg25kjZmCLYWCFuYyBuY6udNgob096EjcG4jXEttzEYtTGG5TYG4zZG l93GfOhlY4UMNubNYGOrnUILG1vt9FjYmDPY2Gpnw8LGaKdv82Nj2uewMZhuddjYaue+wsb8WmVj hLkj2Njajn39yMY0XWFjdKe/N4iNeZexMU1X2JimK2yMdrqSYGMDUzam6Qob07FjYyDdbLAxTVfY mKYrbEzTFTbmQ8fGPBXY2MCUjSkTNqYpDRvTGIaNabrCxmD6oRRsTEs1bKzna5wsn+cvT0w6UpPl gOyTpSM1WYLoZBmITpYwPll6b5gsg9HJMq7VP5KqyTIYmyxjWDpZBqOTZXRZJ8th6PtkCeKT5dBM TZbRTv84syZLmMMkV5NltKOTZTCHD/X2ydL7w2QZTP+grSbLGHufMWqyDOawLbFPlh7DmiyjO33O rclyCGFNlkMIa7IcQliTZTCHD/X2ydKvxWTpIWSy9BAyWXoImSy9z0yWQ5/3ydLDzGQ5NFOTpYeZ yTIY/cAumP45ZE2WQ5hrshzCXJOlh5nJ0u8uJkuPD5Olx5nJcminJkuPM5OllwZbF8H0iXnfuvAw s3XhYWbrYghzbV14abB14eFh68LDzNbF0E5tXXiY2boY2qmtC08FWxeHOM9uc/nynJ4juI0j5TaK 4DaFuNuAuNtc9EAbbqO9CbeBcbfhWu42MOo2DMvdBsbdhi672/jQy20KGdzGm8FtaEc/lglGNwIG Bre52PHDcBtvB7eB6RM8buPt4DY69nAbYmgbAYHoYSRPRbjNRY9D4jYwh0V+uY0PHbfRVITbaDvh NpqKcBtvB7fRVITbaCpwG01FuI2mItxGwxxu48PCbZzBbTwVuI23g9toKsJttJ1wG01FuI2mItxG cxFuo7kIt6E//fQPbqP5wm186LiNpiLcxtvBbTwVuI22E26jqQi36amYXeL25cFmR3CJQvo8h0sU 0o/z4RIgLWm4hPYlXMIZXGJgyiWcwSWcwSWcKZcYkHIJZ3AJZ3AJZ8olChlcwpvBJTSh4RI3O0EW LgGjv8ALM7jEzU5ahUvc7KRVuASM75M4g0sQQ3UJEHcJTUW4hIcZl4DpZ8xxCW8Hl9BUhEsQHv1F mGD0F2E8XeES3g4uAWPfxezZCpfQdIVLaLrCJfSBGi5BKvqZblwCpg8dl/B04RJazeESMO4SMH3P AZfQVIRLDEy5hPcHl9B8hUtovnAJwqy/wBsp1e9iDqY9eMMlYPoBaVxCUxouoeEJl9DwxD6Jhif2 STQ+sU/CuPTN0min78l8HvHw6HDEY4hOHfE4RGe2qPuXB2UdwaIKubYIYFGF3NrbTViUXgiLKqSf tg6LgulzNxZVzOG3Z7AomH4tLIpr6e/qRTu6I6MjD4tyBotyBotypiyqkMGivBks6m7nmsKi7nY+ KizqbueswqLudvopLEr7ExZ1byebfmRRMF1tsCiNT1gU/el2WBbl4cGiPDxYlIcHi/LwYFEeHixK wxMWpeEJi6I/h1e/yqI0PmFRGp+wKI0PFqXhCYvS8IRFeXiwKA8PFkV//NMmDw8W5eHBojQ8YVEa n7AojU9YlMYnLErjExbl/cGiYLqSYFEaQyzKL4VFaZjDopzBojwVWJS2ExalqQiL0lSERWkqwqLo Tz++ikXpwzAsSlNRFuUjx6I0gmFR3k4dlB0iWAdlhwjW62FDBOv1sBh5f42qXg/z6PB6mBc8r4f5 c5fXw4Zr1ethHkNeD/Nc8HrY0E69HnbIxWy9y5cHkh3Beh0p61UE61UE6y3kYJBYL4xbbzH9+xfC emmnf8aI9dJO/+AP66Wdbs9lvTrysF5nsF5nsF5nynoLGazXm8F6FzvGGNa72DHGsF5tJ6x3sWOM Yb3eDta7tCOKP7JeGLdejU9YL/2xL0UYwoP10ky3VazXh471OoP1epixXm0nrFfDHNarYQ7r1TCH 9dIf3zvUOGO9PiysV8MT1uvtYL0eHqzXw4P1eniwXi2fsF6YbodYr4YwrFfHHtarMQzr9XawXo1h WK/GMKxXY4j10p3DF9eV9cLot9INIcR6fehYr4YwrFfbCevVEIb1agjDejWEYb1aYmG9+sAs69UI hvX6yLFejyDW6+1gvR5BrNcjiPW6p2G9jP1gxmW9Gp+wXs1EWK9mIqzXr4X1agzDejUXYb3ezrLN ga8fzeyzGK9fv3ygCGLsSImxIoixIohxIYdTaohxMY/+tVqIMUyXXsQYxj9Upz/9i/0QYxj7SfO4 1MGdS4y9GcRYIxhi7EyJcSGDGHsziPFqZ05DjFc7AxtirO2EGK92BjbEeLXzrSHGazvf+iMxhnEx 1viEGNMfFWMPD2Lsw0KMPYSIsTOIsYcZMdZ2Qow1zCHGGuYQYw1ziDH9cTHWOCPGPizEWMMTYuzt IMYeHsTYw4MYe3gQYy2fEGMYF2MNYYixjj3EWGMYYuztIMYawxBjjWGIscYQMaY7LsYwLsYeQsTY h44YawhDjLWdEGMNYYixhjDEWEMYYqwlFmKsD8wSY41giLGPHDH2CCLG3g5i7BFEjD2CiLF7GmLM 2F2MNT4hxpqJEGPNRIixXwsx1hiGGGsuQoy9nRLjnopZjF9fv7miCGLsSImxIoixIohxIYcdWsS4 mMOuMmI8MCXGMN1EEWP6o19hEYzuGOvIQ4ydQYydQYydKTEuZBBjbwYxJoJ95IgxTFvGhBhrOyHG RLnbIWIM04UWMX7ZAfMQY/rTzzcgxt4OYvyyA+aIMUgPD2JMd/pXTyDG3h3E+GVn2UOMYfrQEWOY LqKIsQ49xNgZxHhgSoydQYw1zoix3schxhrmEGPvDmLM3dWlFzHWOzDEWJ9zIcaM63AitcRY79IQ Y70rQoz1Tg4x1hiGGPu1EGNvBzHWXIQYa2kgxnoHhhh7dxBj7w5i7N1BjLVUQ4y1PyHGvT+jS6zX 
OgrZfyNjdwmQPqeWSwTSNKhcAkRdIlpphV8uEYieuYQ5vClbLjG0Uy7h48YlvMu4hI+8XCK63H9p oFwiLtU/MC+XCKbvL5ZLDMPaXSKQgyfsLjGMqlwCxjfZvB1cIpj+PfDlEp51XCIYdYmBKZcY+lwu Ee00H8MlPF3lEtFMn7vLJbwIcYnocn8rpVwi0vUvP/HkuH95bsURnhyF3P7JnhwgD3tyFNLPfPPk oBV/ctCMnluBOXg2Tw4ddzw5ivFzKzCH27meHHf7hCeeHHf7pCieHDD6/vzA1JPDEZ4cd/swKZ4c MHpaOzKq2/MRHn3nLbLV73ieHJrReHKQUf1a42C69vPkcIYnB5Vhp7WjGX9yaPXEk4NU9IcvTw5N O6uQidlXIcF09a1VSPRZf03R084qJK7VVw+1Cokw9/7UKsTLp1YhXj2sQuJSugrxZwarkGinb/PX KsRLg1WIlwarEC8NViGedlYhA1OrEC8NViFeGqxCvDRYhXhpsArx0mAV4qVRqxAvDVYhXhqsQvzh wyok2uk/NlmrEC8NViFeGqxCvDTYnve0sz0/MLU976XB9ryXxr4974XB9rwXBtvzXhhsz3thsD3v lcH2vFcG2/P+0GB73h8abM8H07fVa3veK4Pt+Yhhkyy254Pp2/y1Pe+VwfZ8MH11W9vzXqm1PT8g z+vpdj6PqVg287mc362ml8ayqexrs863dPXwbC77PL9bzeGLIs630+P8Q6tRmW/3/flndg0WO6OC +xfSH0i4P620sOL+IK3ocP/FPsrB/Qs5fC0z7r/YJ0vh/jD6G5fB9GU67g+jb2oOTLm/ZiHcH0a/ OytS1cOD+8Mse3382/bP/3z7ji6X7VKb3/7tz9/+4+O/v21u9HxsvrGp1uO3Z/D2P9dtJn/swPa3 vhPf/+rnf/zu8dgq8E9//fb7P54//vK/Q+29zk9ZDlbtgfT9i6o9EP2uFRDdsQK52o5V9KWv4ar2 gtF1ZzB91VS1F4y+JRyM7ljB/MxGwOt6+2ojwBESUsjVNgJAep2SkEJ6BEgIregX6XkzkZCBqYTA 6MPAYxMJcWZ/GEzI/jCAuenDwIPMw2AY+r4RMES5NgKGZmojYGBqI8CjzEaAh4eNgInZNwIGpjYC BqY2AqLc9ZxejOuwPbhvBEzM50ZApKKpLxsB0Uw/WlgbAcHoOb1h6LURMDH7RkBUal+h1kaAVyob AV6GbAQEo7/qGYz+vlEwfXW+bwRECPvivDYCPF1sBHgI2QgYmNoIGJjaCBiY2giYmH0jwBk2Ajzt bAR42tkI8LSzEeBpZyNg6HNtBAxMbQQMzL4RMCH7RsDA1EZAhFB/1dMfmGwEeDWzEeAPZzYCPF1s BPi42AgYmNoImJh9I2BgPjcC4tGsv2/kEWQjYBh5bQR4obIRMES5NgKGa9VGwOBptREwMftGwMDU RsDA1EaA3+xsBPi42AiI+BzO8u0bAT5NshEQTN902DcCDt2ZxXgbwVdirAhi7EiJsSKIsSKIsSIh xs4gxsUcVtuIMUxfSSPGfi3E2JkS4wEpMXYGMXYGMXamxNgRxJjoHOS5xNgZxBimf3SDGGt/QowH psTYGcTYGcTYGcTYGcR4YHYxdgQxdgYxJsz9qCNi7O0gxgNTYqxMiLEziLEziPHAlBg7gxg7U2Ls CGJMmA/SW2Ls7SDGziDGziDGziDGA1NirHdpiLGOPcTYGcRYn80hxt4fxFjHFWLsDGLsTImxPuhC jGH0BZYhPIixTkshxnqtEGOY/sIIYgzTf9oSMdbwhBg7gxgPTImxM7sYO4EYO4MYO4MYD0yJsTOI sVZPiLF7GmI8MCXGziDGziDGgzeWGDuDGA9MibEziLEzJcb95prFeJs1vxJjRRBjR0qMC9GjYyB9 hYEYF3LYekeMaUYPnUY7+ma3dyfEWAceYuxMifGAlBg7gxg7gxg7U2JcSN8RDTEmOofjZSXGni3E mHb6lihirF0OMR6YEmNnEGPqVH9WLsbVz08hxlphIcaM/fDTc7sYa5hDjDXMIcYeZsTYw4MYD0yJ sTIhxs4gxs4gxqRLv+jTxx5iDNN3nkuMNRUhxqSib5Yjxj4sxNgZxNgZxNgZxHhgSoz16R1irBUf YqxhDjHWOIcYa5xDjHVcIcbOIMbOlBh7eBBjnU5CjHXqCjH2ECLGyoQYa5hDjDXMIcYanhBjZxBj jWGIsY9rF2MfFWLso0KM6Y2LsT6+Q4y5lr7ZHUzf5UaM3dMQ44EpMXYGMXYGMUYRWqWGGBOffsYK MR6YEmO9c0KMNYaIcUdmMd4eaV+JsSKIsSMlxoogxoogxoqEGDuDGA9MiXExh3eJEGNvBzF2psR4 QEqMnUGMnUGMnSkxdgQxHpgSY2cQY41yiLG2E2I8MCXGziDGziDGziDGziDGA7OLsSOIsTOIsYcZ MfZ2EOOBKTFWJsTYGcTYGcR4YEqMnUGMnSkxdgQx1jCHGHs7iLEziLEziLEziPHAlBgrE2LsDGLs DGLsDGKscQ4x9nYQY2cQY2dKjAekxNgZxNgZxNgZxFiZEOOBKTHWMIcYezuIsTOI8cCUGDuzi7ET iLEziLEziPHAlBg7gxh7lBFj9zTEeGBKjJ1BjJ1BjAdvLDF2BjEemBJjZxBjZ0qMCfP9J8R4tbPY iLEiiLEjJcaKIMaF9N0fxLiQvvkTYrz6ge4S41UPjyPGXMsPfeuoQoydKTEekBJjZxDjYq79XQvE eNVz6iXGjiDGqx6JR4yJoB+l0HZCjHXoIcYDU2LsDGLsDGKsYQ4x1hiGGBOfvh1cYqzFHGJMM707 iLEWfIixDx0xHpgSY+5jF2MtjRBjLY0QY9o5/AhniTHp0peNPT6IsaYixFhTEWKsIQwxdgYxdgYx dgYx9vAgxhqeEGMY3zF2BjHWuyLEuJfGNMmt2+C/mOQGZJ/kJuRzkhuQfZIbkH2SG5Ca5CZmn+RG 5nOSC0Z3f6Z29kluYj4nuRH5nOQmZp/kJmaf5Cbmc5KbkH2SG5nPSW5i9kluiHJNckM7NcmNzOck NzH7JDcx+yQ3MfskNzH7JDcyf5/kJmSf5CZmn+SmMO+T3NTOPsmNzOckNzA1yU3MPslNzD7Jjczn JDcx+yQ3MZ+T3ITsk9wQ5prkpnb2SW5i9kluYvZJbmL2SW5kPie5galJbmL2SW5i9kluYvZJ7hjn eZK77CeF7PXdQOz13UDs9d1AfJIrxL42KvtiZ3+yM/bqeLZjK7lgfuK123WL35e2UEj/zisCebOP LglkIV2GCOTNPiwjkDf7FDUCSX/t5zGyN/Z2QTK2JB7CF7bgTNnCgJQtFNPLLGyBLvc6wxZgep2V LRBle+02mcNBq7IFLYuwBc1o2IKGJ2yBrB++3a9swdvBFminvwaMLWihhi0QZrcFjSG2oCEMW/Du YAs3+5A5bMHDgy0MTNmCMmELGuawBR1X2ILepGELMP1a2ILGGVvw7mALGuawBQ8PtuAMtuAhxBa8 
z9gCfbafxxiel2ELeq2wBY1P2IKmNGyhtzNPco+vDkQMCJPco+01/2CSA5GTwomoLYD4JOcMk9zA 1CTnDJOcxiYmOWdqkhuQmuScYZJzhknOmZrkHGGSG5ia5JxhknvYp0ExyWk7MckNTE1yzjDJOcMk 5wyTnDNMcgOzT3KOMMk5wyTnYWaS83aY5AamJjllYpJzhknOGSa5galJzhkmOWdqknOESU7DHJOc t8Mk5wyTnDNMcs4wyQ1MTXLKxCTnDJOcM0xyzjDJ9TjPk9xiO9tMcot9DsEkB6JL4sU+hWCSA9GV 3GKb9THJMaTDN6XVJLfYBxUxyS32QUVMcott+sckV8xPLa3X+gaIwznTPSGFdNUhIY5UQkDUOlbb gSEhdLd/VRwJKeawoCMh2uFICEz/zJWEaJcjIc6UdQxIWYczWEcx/fP6sA4PT1mHjxzrGJiyDi7V NzCwDm0nrEOHHtZBZdhXW2d/+ks6WAf9sV+eTMatw/uMdWg1Yx2arbAOLeawDg8z1uFdxjoGpqxD mbAOZ7AOreawDo1PWIeOPazDmbIOreawDm8G6/ChYx1URl82Yx0+dKyD/vQP0bEOwtyvhXVon8M6 tJrDOrTPYR1azmEdGuc6hjn1eT+GOTH7McyJ+TyGOSKfxzCHaq5jmFN49mOY09D3Y5gDU8cwh3Ku Y5hTO/sxzGHsdQxzKOc6hjmMvY5hDuX8eQwzo9y/amA/hjn1eD+GORRzHcOcerwfwxyKuY5hTlHe j2EOfa5jmCPzeQxzYvZjmBOzH8McqrmOYQ7xqWOYw9jrGObE7Mcwh2rej2Eem5nF+PXVN1oNCGLs SImxIoixIoixIiHGziDGA1NiXEz/Oo8QY28HMXamxHhASoydQYydQYydKTF2BDEemBJjIuhiDNPN DzHWa4UYD0yJsTOIsTOIsTOIsTOI8cDsYuwIYuwMYkyY+0/ZIMbeDmI8MCXGyoQYO4MYO4MYD0yJ sTOIsTMlxo4gxoS5fwSGGHs7iLEziLEziLEziDF9PvwuTIkxjH2jVTK+HecMYqxPjRBj7w9irGMP MXYGMXamxHhASoydQYx9WIixhxAxVibEWMMcYqz9CTHWcYUYO4MY6xMqxNjHtYsxhH3V69QKYuzt IMYeHcTYR4UYw/TvFkOM3dMQ44EpMXYGMdY+hxjD2Fe9Tu0gxjr2EGNvBzHWXCDG/dk8ivHlUkd8 7FQbyOGk1S7GIH0lV2IMYt9olX1pm34lxoHojrH3FzGODuuOcTC6YwzjYjwwuxhPyC7GA1NiHEHW HeMhPLsYDyMvMZ6YXYzjUirG3g5i7ENHjL2QEWMfOmLspYwYx7X6Fm2J8dDnEmOv5hLjocslxl7M iPEQ5hLjocslxhOzi7EziPHAlBh7NSPGHh/E2MeOGA/MLsZezYjx0EyJ8TD0EmOvMMR4GHqJcfTn sBu8i3Fc63Bgaxdj7zNi7NWMGHufEWMvZ8TY44wYD30uMR6YEuOB2cV4QnYx9mpGjIfwlBgPQy8x dgYx9nJGjId2Sox97IhxlMZBencxjv7YjyFOzKcYDz0uMR56XGLsxYwYe7YQYy9mxHjoc4mx9xkx nphdjAemxHhgSoy9mhFjjw9i7GNHjAemxNirucT40MwsxrcvD3A6ghg7UmKsCGKsCGKsSIixM4jx wJQY3+wwVoixt4MYO1NiPCAlxs4gxs4gxs6UGDuCGA9MibEziLFGOcRY2wkxHpgSY2cQY2cQY2cQ Y2cQ44HZxdgRxNgZxNjDjBh7O4jxwJQYKxNi7Axi7AxiPDAlxs4gxs6UGDuCGGuYQ4y9HcTYGcTY GcTYGcR4YEqMlQkxdgYxdgYxdgYx1jiHGHs7iLEziLEzJcYDUmLsDGLsDGLsDGKsTIjxwJQYa5hD jL0dxNgZxHhgSoyd2cXYCcTYGcTYGcR4YEqMnUGMibL9ONjkaYjxwJQYO4MYO4MYD95YYuwMYjww JcbOIMbOlBgT5vpGq+8/kfx/PlD5GgplbmRzdHJlYW0KZW5kb2JqCjIwIDAgb2JqCjw8L0ZpbHRl ciAvRmxhdGVEZWNvZGUKL0xlbmd0aCAxMDI2NAo+PiBzdHJlYW0KeJytnd+qM0lyb++/p9gvMBqp VPoHhwPj4/Fc2zT4AYw9YBiDx+8Pp3b3rlg/RX0rdl+YhnGzvTqVGRlZuSKVki4f5+2fP1y2/3m8 lo9/+9uP//7x+Zf7Zfn1D3//9x//+vFfn388PW6/sl//8of79Xr/5C8fn//8y18+fvuXv//1xx// cv746/9szTzX9fnxvNw/Lufb/bOp//i17c+/LJfLct7/9FPuub7uX388P/mPz58v9du/bC/1+d+9 1q2ry6+v/Qltf3otl2v706/U5WN5LL/96XE//u0Tuy2Pn/2X7U+f1OVjvdyjsfa3T+xzBD/5L9uf PqnLxzU79v6nf97++emsXO+fc/Hb/96ur9Ny+1iW0+13TNrXFP3DLz/++E/njy1av/zHj8ueCtsc PG+Xx8cvf/vxf87n2+3/fvzyn/x/79fT87z9F4Gs6zvyuJzu560z2cqlIa/Ter4+31p5vCPPx2nZ /ru3VpZ35HU7nc/3t77czu/I5bxs/X28Del8bczlvHX4+Xpr50+deZ7WrbEpNJflftoSe30b1f9r zHU9nS9bFmQ7beSX9XJ6tmaeHXmd7pfbMg7r9ti6fH+bh0ubqu3/vXX58TYRlz6sx3Xr8lt0Lv/Q kOf59FzO71nx51+ZP//yXQLfz3v2XreJ+N9J321h/XweSV+Qm6VvIX0WSV9a0fSllaelL8ii6Qvz 0vTVDkf6wvyTpq92OdK3mMs/WvoWckhN0hem5xTpS3f6qiR9dTpJX5AeZdIXpo/8+Tzdl8t7cvVh ve6ndVnGmVjO62lZ1vfEaLmzXJbTloFvIVzOnXltfb6/z0Trz7I8tj4/3vpzbrO+XG9bn1/LNF3L ej0t1/P7VLRZX26X0/l6eUxxXm7P0/P6nvBttpb7/XS/ru9TcW/MYz2t19ttDM9z2bp8f8uM/vBb Xuety8+3qTj36Xo9ti6/1inht8ic7uv5PoXwerme1vU9Dfviui6X0xbqZcrm6/I6bRvuOjLbc/W5 3h5TfK7ruvX5MXb5tmxdfi5Thl3v563Lr9sYnvvztO3Lz/G1HrfTJkeXaXVdn9fT/dbSp03X9XU5 rbf1Pq2u6+tzs7m9pvCs58fW58dlSp/1sm59fq5TfLanwdbn17hK1+v5tN7Pb/3pG+x6fZ6W+/I+ Fy191vV+2raT97loD8z1tu1/bd77fK33bQO8j9m83rcN8P54m4prC/P62HbA+/P9gdm7/Ny2wPvr Oa2K9bVtgY/LZerP7bxtgY/lbSqu585sW+Dj+p4abUpvl20LfKyvaXXdlm0LfNzHB+btum2Bj8d7 avR2Nue5P56PaUpv67YHjgvndtt2wOe70PREvW3Sc36+G82lzdbtsW2Bz3el6Ylxe2xb4PPdaQ7t 
PLct8PkuNT3hb69tC3y+S01/rc3Atj6/S83l3pltC3ydx0V6v2xb4Gu2kfuybYGvJjWHsmfbAl/X 1zSu+yY+59e71Rz6s25b4Ov+s7nYTHkw2vtqvS+jVQSjLaR7MUYL0kKA0RbSH78YLa304gajpZnu WhhtMQfLxGh13GG02uUwWvrc/bCMtpC+YYTR0uVebGG0MD08GC1M73IZrSMYLcyhaCujLaY/NMNo GXpXNoyWmeiWidHqbIXRMlv9tTBaTeUwWp2uMFri0+qqMFqYrqJltDrrYbQwvcsYrXcHo4XpIcRo Gfqf1Wh1SsNoea22KMJomYoWnjBaTY0wWk2NMFpdyBitruMwWhg3Wh8WRqsZFkariz2MVldyGK1m YRitpk8YrU5FGK1ORRit7hRhtDBd6zBamHZKGEar+0kYrc4FRuvDwmgJYR8WRqtTGkar0xVGq0+N MFpd7WG0hKdbL0ar6RxGq1tyGC3M/Xd4y/P8rbcogrc4Ut6iCN5SyNoigLeA9NMxvAWmb/B4SzEH /8FbtMfhLTB998ZbeK3DKfHuLSD3jpS3MKzeDN6iEQxvocvqLYUcrA5v8ejgLQzLvYUu+0kc7bQn UHhLMcNJHOPqJ3p4C0x3CbyFsXePwls0PuEtGh+8RZM5vIVmWhKGt3h48BYPD94C8+xMeQtM37zx FmfwFpi2WYa36HSFt+h0hbfodOEtOhXhLUxF7zLeQjt/6kx5i05XeAtD7/3BWzyEeIuHEG+B6Q6A t8C0lRzeoukT3qLTFd6i0xXeog/V8BbdB8JbmK/OlLfoszm8RVMjvEWnPbyFEHb/wVt0dYW36HSF t+gTIbxFUyO8RdMwvKWnxuwtr/u33qII3uJIeYsieIsieEshg7fAuLcUM3iLdie8Bca9hddSbwFx b2FY7i0w7i10Wb2lkMFbPDp4C8Nyb6HL7i20495SzOAtjMu9Bca9hbG7t2h8wls0PniLJnN4C824 t3h48BYPD94C494C497iDN4C496i0xXeotMV3qLThbfoVIS3MBXuLbTj3qLTFd7C0N1bPIR4i057 eAuMe4u2E96i0xXeotMV3qLTFd6iD9XwFt0HwluYL/UWfTaHt2hqhLfotIe3EEL3Fl1d4S0w7i06 peEtmhrhLZqG4S39yTJ6y+cNq2+8xZHylgHZvcWR8hYQPW8JRL0lGPUWGPcW7zHeEu10prwl+mPe Es3020jlLR4dvGV4qfKWeK2WZ+UtQ3TKW+KlDu8l7d4yzFZ5i3cZb4n+dJcob/HZwluC6Rt8eUu8 Vu9PecvQn/KWYVzlLT6l5S3R5a515S0wX5o5r/hF39euFa8IK96RWvGKsOIVYcUXcvMVr83Eih+Y WvGLvQ0fKx6mLx9W/GJv+bPiB6RWPEPvy5AVX8zaCydWfDGHd6FrxRdy/bOueI8gK94ZVrwyseIX uzASKx6mLw1WPOHpus6Kpz9613HoMyveGVY8fT48FfYVrwkWK16zh0plYKpS8QyjUvEMo1LxDKNS 8RSjUvEQUqlMzF6pDExVKhGfrvR7pTIgVal4hlGpeIZRqQxdrkrFn09UKj7tVCqeYlQq3g6VysBU peIpRqXiKUal4ilGpeIpRqXicaZSGZi9UpmQvVLx9KFSCaYPvSoVTzEqFU8xKhXvM5VK9KffP6xK xaedSsVTjEplaKfuOg7M111HTzDuOnqCcdfRE4y7jp5g3HUcolx3HQem7joOnlZ3HSM6/XMrddfR HxrcdfQM467jIcNmy1y/vX/oCJbpSFmmIlimIlhmIYNlajNhmatd4gjLXO2CRlgmjFumt1OWOSBl matdlQnLLGawzNWu02CZhQyW6VHGMp3BMpUJyyQ8/YIdlrna9Z6wTMLjlkl/3DK9z1jmaleJwjJ1 2rFMTbCwTG8Gy3QGy9QMC8vUDAvL1AwLy9QUC8vUMIdlDkxZpjNYpqYYlqkzGpapGRaWqRkWluld xjLpT//AEZap0x6WqSkWlqnthGU6g2VqioVlaoqFZWqKhWVqioVlapzDMp0pyxyQskzNsLBMmG6i WKamWFimplhYpvY5LFP7E5ap0x6WqSkWluntYJnO7JapCRaWqQkWlqkJFpapCRaW6VHGMp3BMt3T sEzNsLBMmHYYE5apGRaW2TNstsz7t7dFHcEyHSnLVATLVATLLGSwTG0mLLOYPiFhmXe7chOWeber MmGZ3k5Z5oCUZd7tYlNYZjGDZd7tRhKWWchgmR5lLNMZLFOZsEzC09UGy4Q5vMNRlkl43DLpj1um 9xnL9D5jmTrtWKYmWFimN4NlOoNlaoaFZWqGhWVqhoVlaoqFZWqYwzIHpizTGSxTpwvLBOmfGMEy NcPCMjXDwjK9y1im9wfL1GkPy9QUC8vUdsIyncEyNcXCMjXFwjI1xcIyNcXCMjXOYZnOlGUOSFmm TldYJoxbpqZYWKamWFim9jksU/sclgnTP8GCZWqKhWVq+oRlOrNbpiZYWKYmWFimJlhYpiZYWKZH Gct0Bstk5P0MEsvU6IRlwrT4hGVqhoVl9gybLfPx7d1eR7BMR8oyFcEyFcEyCxksU5sJy3zYxaaw TGewzIddbArLfNhFKyxzQMoyH3YNLSyzmMEyH3Z/DMssZLBMjzKWybD6nQMsE6bbIZbpDJb5sOtj YZmExy2Tcbll6tjDMjV7wjIHZrdMTbCwzEe7Ffczy9QMC8vUDAvL1AwLy9QMC8vUFAvL1DCHZWqK hWXC9ENILFPjg2U6gmVqhoVlaoaFZfrQsUzNnrBMZ7BMTbGwTG0nLFPjE5apKRaWqSkWlqkpFpap KRaWqU+WsExnyjJ1BYZlejNYpj7EwjI1xcIyNcXCMjXFwjJ12sMyncEyNcXCMr0dLFPjXJapCRaW qQkWlqkJFpapCRaWqbMelqnPp7BMHXlYpmZPWKYu0rBMzbCwzJ5hs2W+vr+JrQiW6UhZpiJYpiJY ZiGDZWozYZnFDJbpDJYJ45bp7ZRlDkhZ5kuvR2OZxQyWWYxbZiGDZdJl/Q7KYNwydehhmTAHgyzL hDmcd5ZlEp7z90vjcf72yrIjtTQGZF8ajtTSAOnfNVRLA+TwrcC1NPyVWBrB9ByqpQHTv7eHpRFM /3RBLY2hP/vSmJB9acD0r/9haUR49MOV8Vr24cp4Kf1w5dDlWhrRHf1wZUy6Lo1oRz9cGTOhH66M cfVCrgqwYPTLrILpy7AKsIGpAmxivgqwCHOvB6sA86mgAPOpoACLqdCvZ412tADzqaAAi6H3wqkK sIGpAsynlAIsGC3Ahnb2AmxAqgALRj9c6VNKAeZTSgEWU9GZKsBiSvuFkyrAop3+3WNVgPmUUoD5 dFGABdOLtCrABqYKsGD6hz2rABvaqQJsYKoAi7k4VFdfBZhPKQWYTykFmK9ACrCYrl6kVQEW7bSH GAWYTykFmKczBZgvCwqwgakCLBj9etZgpADzJKQA8+ShABteqQowTx4KMM8MCjDPDAow39kpwILp 
R/hVgEVm6DG/PzQowDwL+XpWzzC+ntXni69n9bm437Y98PUY5/1+3/bA17idbAvidD2/W83hG243 87mc363m8E7JprKv8/KcUuy+uezj/G41vZ3H+Xq6nd+t5qvPs8wv394MdwSZd6RkXhFkvpD+AVVk HqRXTsj8YpfDQua1NyHzMF34kXleS39rIZjD16DsMs+w9KdCgtFPHEeXu6gj8z70kvlCBpn3ZpB5 2unDQuYXu+wXMr/YNb2QedrpJorMw/SiAJn3/iDzMPqJ44iPfuI4mG7qJfMa5pB5Hzoy78NC5p1B 5n0qkHna6Uf4yLz2OWRepytk3l8LmYfpho3M63Qh8zpbIfM6XSHzdKfLMzKvKydkXqcrZF6nK2Se dvSbUnwqQuZh+rsgyLz2OWRepyJkXuciZF7nImRe4xwy730umR+QknmdipB5bweZ96lA5rWdkHmY g/CXzOtUhMzrVITM61SEzHt/kHmdrpJ5HzkyrzMRMu/tIPM6EyHz3g4yr8/CkPk+E7PVrd/exHYE q3OkrE4RrK4QtzoQt7rVLmOF1Wlvwupg3Op4Lbc6GLU6huVWB+NWR5fd6nzoZXWFDFbnzWB1q10f C6tb7fpYWJ0zWN3AlNWtdr0urM7bwepguh1idd4OVgejR7SE+fes5tu3N94cYTU7UqtZEVZzIb6a QXw13+xN71jN2ptYzTC+mnmt/pYCqxlGVzPD8tUM46uZLvtq9qHXai5kWM3eDKuZdvrbUazmm10t iNVMO4f6q1YzzOFNmVrNMP3NFFaz9jlWM2Pv9Rer+dauBPxsNWsMWc03uw0RNZqHkBrtZjcdokbz EFKj+VRQo2mYo0bTEEaNpiGMGk1DGDWa9jlqNI0PNZqGOWo0b4YaTcMcNZq3Q43mw6JG8zBTo3mY qdE0zFGj0Z9e71Cj6biiRtM4R43m7VCjaZyjRtMYRo0GYzfePMxRo2mYo0bzMFOjeZep0TQ8UaMN TNVoOhVRo3k71Gg6FVGjeTvUaDr2qtH6TMz28/j2JpYj2I8jZT+KYD+FuP2AuP089MoS9qO9CfuB cfvhtdx+YNR+GJbbD4zbD112+/Ghl/0UMtiPN4P90I7bz8NudIX9wHgt87ALZmE/3g72A9MVAPvx drCfh91CC/shhvYNeYH4dROdirAfDzP248PCfgam7EenIuxH2wn70akI+/F2sB+dirCfPhXzU/P5 /SU9RXhqFtJXNE/NQvqVEp6aIPYb6t6XeGo6w1NzYOqp6QxPTWd4ajpTT80BqaemMzw1neGp6Uw9 NQsZnpreDE9NndB4asLop6SCOXybUz01n3YjIJ6aT3ubPp6aMP1Uhqfm096Cj6cmTK9PeWrCHH6V cn9qEub+tOOpCdOfrDw1dbriqelTwVMTRn9D3duJp6ZOVzw1CY/+4mQw/r6eTlc8NWH6m21VM4Lo p6R8uqJm1OmKmlGnK2pGfehGzchUHN6zq5oRpn8NATWjTlfUjMRHfwHBpzRqRp3SqBn9tagZNc5R M2qco2YkPvaLkzEVvY6jZoTpP8VOzQijn5IapoKaUVM1akYNT9SMGp6oGTU8UTMyrsMnoKpmpJ1D XVk1o8anakaNTryv16MzGtLz/O3NJ0fKkECWFoEyJJBru4VfhuQvVIYE0m/hYUjB6G9ywxx+I6EM KZj+WmVI8Vr6G1HRjtWVPnIMaWDKkAamDGlgdkMCcUMamilDinb0PbJIHT1Vj3a0rgxGT9W9PxhS jEt/ayEY/Y0ojw+GFP3p5rcb0hCeMqQhPGVIQ3jKkIbwlCEN4SlD8vBgSB4eDCn607f3MiSPD4bk 8cGQPD5lSB4eDMnDgyEN4SlDGsJThhT96WZThjSEpwxpCE8ZkocHQ/L4YEgeHwzJ44MheXwwpKE/ ZUjBdCUpQ/IYliENL1WG5GHGkAamDGmYijIkbwdD8qnAkHwqMCSfCgwp+qMfY/CHIYbkU7Eb0jDy MiSPIDefhnbq5tMQwbr5NESwbj4NEaybTzHyfi++Psbg0eFjDJ7wfIzBn7t8jGF4rfoYg8eQjzH4 XPAxhqGd+hjDYS5m612+vRnmCNbrSFmvIlivIljvoressN5Fb2thvcX0zwmH9dJOf6cE66Wd/vYF 1rvoTbWyXh15WK8zWK8zWK8zZb3L9zfDhmaw3sVuR4X1LnpbC+vVdsJ6F72thfV6O1jv0q5i/cx6 Ydx6NT5hvfTn8G3xu/V6eLBemtH7/sPQsV5nsF4PM9ar7YT1apjDejXMYb0a5rBe+tP1EOvVOGO9 PiysV8MT1uvtYL0eHqzXw4P1eniwXk2fsF6YbodYr4YwrFfHHtarMQzr9XawXo1hWK/GMKxXY4j1 0p3DdZOyXpj+i6ZYr4cQ6/WhY70awrBebSesV0MY1qshDOvVEIb1aoqF9eoDs6xXIxjW6yPHej2C WK+3g/V6BLFejyDW656G9TL2gxmX9Wp8wnp1JsJ6dSbCev21sF6NYVivzkVYr7ezf3j3MBWzGF+/ vWTtCGLsSImxIoixIohxIYe7NohxMbf+9S+IMUyXXsQYRt8wj/70L6BCjGHsp3fjpQ7uXGLszSDG GsEQY2dKjAsZxNibQYyvdm8uxPhq9/hCjLWdEOOr3eMLMb7aHb0Q42u7pfczMYZxMdb4hBjTHxVj Dw9i7MNCjD2EiLEziLGHGTHWdkKMNcwhxhrmEGMNc4gx/XEx1jgjxj4sxFjDE2Ls7SDGHh7E2MOD GHt4EGNNnxBjGBdjDWGIsY49xFhjGGLs7SDGGsMQY41hiLHGEDGmOy7GMC7GHkLE2IeOGGsIQ4y1 nRBjDWGIsYYwxFhDGGKsKRZirA/MEmONYIixjxwx9ggixt4OYuwRRIw9goixexpizNhdjDU+IcY6 EyHGOhMhxv5aiLHGMMRY5yLE2NspMe5TMYvx7dv7944gxo6UGCuCGCuCGBdyOKFFjIs5nCojxgNT YgzTTRQxpj/6WeJg9MRYRx5i7Axi7Axi7EyJcSGDGHsziDER7CNHjGFaGRNirO2EGBNl/YaYYLrQ IsY3uzweYkx/+v0GxNjbQYxvepm9xBikhwcxpjv90ipi7N1BjG92Tz3EGKYPHTHWYYUYO4MYO4MY D0yJscYwxFgXKWKsIQwx9u4gxqwcvyehqyvEWJ9hIcYM63BLtMRYV2CIsWZ8iLGu0hBjjWGIsb8W YuztIMY6FyHGmhohxrq8EGPvDmLs3UGMvTuIsWZqiLH2J8S492f2hGddc2yTgicU0vdLPAGkKQ6e 8LRLA3gCrbTExxNA/D5lMYfP8uEJ3g6eoOMOT9AuhyfoyPEEuty/7RpP4KX6m+F4Akw/O8QTfFjl CSAHByhP8FHhCU+7CBGeoO2EJ8Dorz/6rIcnwLgnOIMneJ/xBNpprhWeoNOFJ9BM35fxBE3C8AS6 3D8pgicwXf/4/ZPjdf72Tooj9eQAuf5JnhyB3OTJAdLvc9eTI1rRJ0c0o3dSYA4OXU8OHzdPDhi/ 
kwJzWM77kyO63N8JqCdHdLk/EOvJEUwPTz05grEvlB+QenIE0++g15MjZku/UD6G3pdGPTliJvpq rieHzxZPjpgtPXoPRr9QfmDqyRGzfrhlvT85oh27ie2ZwZMjpqJ/5K2eHMH0p0s9OTzDqDCC6UOv CiOYfm5cFYanGBVGjL3/6ldVGJ4+VBjxWlphxHTZF8p7FlJheBZSYcRLHX71a68w/LlChRHt9GP+ qjA8w6gwPMWoMDzFqDA8xagwPMWoMDzFqDA8xagwPMWoMDzFqDA8xarC8AyjwvAMo8LwFKPC8BSj wvAUo8LwhyFH79FO/+2rOnr3FOPo3VOMo3dPMY7ePcX2o3dPMI7ePcE4evcE4+jdE4yjd08wjt49 wTh69wzj6N0zjKN3zzCO3gcnrKN3zzCO3v0hxtG7P8Q4eg+mH/PX0btnWB29Rwh7kVxfKO8CxRfK exLyhfKehnyhfDDdq+sL5T3M2zrf+vz+8OnT/tjM53JuVtOr/81lX+dmNb2dzWUfl/P6k/TR4uKx /qx0mGuRi92HoRYppD+0qEVopT2zqEVA2ixTi1zsbSNqkYu9lxq1yMXeoYpaBEbf7Qimyz+1yMXe fYpaRENMLQJy+OBo1SJMg14DCua+z/0/b//8949P9H5ZtvlYPv7+7z/+9eO/fmwe9rhtbrNp3e3X 5/32f5Ytircd2P6rT+LzP/36lz9sD5r7x7/97ccf/3L++Ov/THm1zdbPS0/yqpB+VkJeFaLfxwKi p2Mgi52ORV96vUhewXiNC9N/B5C8gtFPGwejp2Mwv+vQYXskfXfooAgTUsiihw6F9DxlQgrpEWBC aEW/VsybiQkZmJoQmP5QYUI0NjEhztRCH5Ba6MVc9WvFPMhx6OBDr0MHjzKHDt4Mhw7OcOigUY5D Bw1PHDoMTB06OMOhgzMcOpDuet8vxnU4iqxDh4HZDx2YiiatcehAM/oz4sHofb9h6Bw6DEwdOpCp vdLl0EEzNQ4dNA3j0AHm8BHxOnSA6acFHDrA9Cq/Dh0IYS/yOXTQ6YpDBw1hHDo4w6GDMxw6OMOh w8DUoYMyceig0x6HDjrtceig0x6HDjrtcejgfebQwRkOHZypQ4cBqUMHZzh0IIT9vIVDB31gxqGD ZnMcOujDOQ4ddLri0EHHFYcOznDoMDB16ODMfujAo7kfS3DooBGMQwcfOYcOmqhx6OBR5tDBX4tD B/c0Dh0Gpg4dnOHQwRkOHXSxx6GDjisOHYjP4U5gHTroNhmHDjD9jmIdOvTuzGK8Paq/E2NFEGNH SowVQYwVQYwVCTEu5lC5IsYw/Y1zxBjG343z/iDGzpQYD0iJsTOIsTOIsTMlxo4gxkSnezpiDONi rFEOMdb+hBgPTImxM4ixM4ixM4ixM4jxwOxi7Ahi7AxirFMRYuztIMYDU2KsTIixM4ixM4jxwJQY O4MYO1Ni7AhirNkcYuztIMbOIMbOIMbOIMYDU2KsTIixM4ixPptDjGH6O3+IsaZziLH3BzF2BjF2 psR4QEqMvcuIsQ8dMYbpbzIixjBd/RBjbSfEWLfJEGMde4ixM4jxwJQYO7OLsROIsTOIsTOI8cCU GDuDGOuiCDF2T0OMB6bE2BnE2BnEePDGEmNnEOOBKTF2BjF2psS4P5snMd6CdflGjAdkF+MJ+RLj QOyaWiC9wtjFOJDD0fsuxtmMiXG2Y58QH7pTYjwMvMR4Yr7EeES+xHhidjGemF2MJ+ZLjAPpJ6Il xhmdw1W2LzGeZmsX42ynH4nuYjx0ucR4ZL7EeGJ2Mc487fe5djHOcfXborsYDxlWYpxjl59KHsJc YjyEucR4CvMuxlN4djEemS8xHpgS44nZxXhidjHO6bIvDB3GXmKcjHwQZpiKEuOcisPPIH+J8TSs XYwnZhfjidnFeGJ2MR6ZLzEent4lxkPGlxgPYS4xHuJcYjzEucR4GFeJ8cTsYjwxX2I8hWcX42E7 KTEetq4S4ymEuxgPTInxEOYS4yHMJcZDeEqMJ2YX4yGGJcbTuH4T42lUuxhPo9rFOHtjYjw8vkuM 87XsE+LJ9FPuXYwnT9vFeGS+xHhidjGemF2MUxH6PaxdjDM+/T7XLsYj8yXGw8opMR5iuIvxEZnF eBvld2KsCGLsSImxIoixIoixIiHGziDGA1NiXIyeGE/tIMbOlBgPSImxM4ixM4ixMyXGjiDGA1Ni 7AxirFEOMdZ2QowHpsTYGcTYGcTYGcTYGcR4YHYxdgQxdgYx9jAjxt4OYjwwJcbKhBg7gxg7gxgP TImxM4ixMyXGjiDGGuYQY28HMXYGMXYGMXYGMR6YEmNlQoydQYydQYydQYw1ziHG3g5i7Axi7EyJ 8YCUGDuDGDuDGDuDGCsTYjwwJcYa5hBjbwcxdgYxHpgSY2d2MXYCMXYGMXYGMR6YEmNnEGOPMmLs noYYD0yJsTOIsTOI8eCNJcbOIMYDU2LsDGLsTIkxYV5/hxivdhcbMVYEMXakxFgRxLiQfvqDGBfS D39CjFe/0F1ivOrlccSY17JL38OoQoydKTEekBJjZxDjYpb2VlyI8ar31EuMHUGMV70SjxgTQfuJ qaGdEGMdeojxwJQYO4MYO4MYa5hDjDWGIcbEpx8HlxhrMocY04x9sHlI+BBjHzpiPDAlxqxjF2NN jRBjTY0QY9rp3w2EGDNd/UPLiLHGBzHWqQgx1qkIMdYQhhg7gxg7gxg7gxh7eBBjDU+IMYyfGDuD GOuqCDHuqTFvcrfvPkgzIGxyjtQmpwibnCJscorEJucMm9zA1CZXzHD64+2wyTlTm9yA1CbnDJuc M2xyztQm5wib3MDUJucMm5xGOTY5bSc2uYGpTc4ZNjln2OScYZNzhk1uYPZNzhE2OWfY5DzMbHLe DpvcwNQmp0xscs6wyTnDJjcwtck5wybnTG1yjrDJaZhjk/N22OScYZNzhk3OGTa5galNTpnY5Jxh k3OGTc4ZNrke53mTe+w3hezju4HYx3cDsY/vBuKbXCH2FVXZF7/7Q2d6OcMmRzteyRXzOz52+zq/ vr9EVUj/fi0C+bK3LglkIV2GCOTL3iwjkC97FzUCSX/tZzayN4fPSlcgYbwk1vCFLThTtjAgZQvF 9DQLW6DLPc+wBZieZ2ULRNk+dpvM4aJV2YKmRdiCzmjYgoYnbIFZP3yTYNmCt4Mt0E7/GDC2oIka tkCY3RY0htiChjBswbuDLbzsTeawBQ8PtjAwZQvKhC1omMMWdFxhC7pIwxZg+mthCxpnbMG7gy1o mMMWPDzYgjPYgocQW/A+Ywv02X5mY3hehi3oa4UtaHzCFnRKwxZ6O+Mmdzl/eyHCkdrkBmTf5ED6 JwhqkwukXdivTS6Q/jWKtckFozeFvTdscsH03aA2OR84m9zA7JvchOyb3MDUJjcwtckNzL7JDUht chOzb3IDU5tcRFlLYm+HTW5i9k1uYGqTG5ja5AamNrmBqU1uYr42uQGpTW5gapMbwlyb3NBObXIT 
[base64-encoded PDF attachment data omitted]
UKwnUJUHWI9mCq3HIViPDh2tR4eO1qO3XlqPThm0noxdhWI9eg+n9Wim0Ho0CWg9eq3QegJdhCbW ozdMWo/mHK1HM4XWo0lA66kX+QvrOe8ML61HGViPM7EeZWA9ysB6BtNZT6DGegbUWY92idYTqLGe HM6tJ0xjPTm5xnoCNdaTfrv1DKazHo8SrCcn11hP+t1YT1pqrGdAnfXk7BrrCVSXQ2A9CUFjPRon Wo/GCdajCU7rSUON9XiYYD2JQGM9gRrrCdRYjx6O1hOosR4/HKxHh47Wo0MH69FRofVkVBrrSUuN 9ejQ0XoSgcZ6POCwnkC147CeQI316AVF69HxpfXo0NF6dOhoPXrrpfXolEHrydi59eg9nNajmULr 0SSg9XgsYT06KrQevWHSejTnaD2aKbQezTlaT6Df/oj1LMdr61EG1uNMrEcZWM9gmrWeMI31BGqs Z0Cd9Wi3aT1pqUKwnvTJrScN1R1ZsB6NEq3HjwbryeFK4sF6PEqwnhzt8hgs1uMjB+vRftN60qeq IbAeHTlaT6AqBrCeHK72CdbjfYL1+NnBenR8YT3pd5VDWM+Avgnri7vBpo/sczdQBncDZ3I3UAZ3 A2VwNxjM2twNtCHeDRood4PNthnwbhCoXlW4G2y2qwF3g4bJ3SARqNcn7gYDWmpZhrvBgC6P13M3 GMzjZ78beChxN3AIdwOFeDfYbNMK7waBqpLjbpAw1QoAd4P0ybeDNh3H3cAh3A3ScX/erRnHu4Fm E2sgh1ADacqxBtKUYw2kKccaSHOONZDGkjVQA6UGcgg1UOJUK4DUQJpyrIE05VgDacqxBvJ+owbS GxhrIE0C1kCac6yBtCXWQA6hBtKcYw2kOccaSHOONZDmHGsgjThrIIdSAzVMaiBNOdZAgeoOVdRA mnOsgTTnWANpx1kDpU91DyNqIE0C1kCac6yBvKVsB22gj+2gnnHYDuoZh+2gnnHYDuoZh+2gTbyz HbSBsh20kb1sB/U7GLaDeiixHdRTDttBLyn3wlePH9ifqQx81Zn4qjLwVWXgq4PpfFUboq8ettWE vnrYzhb6aqDGV72l+GrDxFcP2yREXx1Q56uH7SSCrw6m81WPN3zVIfiqQvTVhOmiovHVw3Y30VcT psZX06fGV73j8NX0qXoffFWTAL6qGUdf9Ybgqw7BVzXl6KuacvRVTTn6quYcfVUDTl9toPiqQ/BV zTn4qg4vfVVTjr6qKUdf9X7DV9OnZn+mJgF9VXOOvqot0Vcdgq9qztFXNefoq5pz9FXNOfqqRpy+ 6lB8tWHiq5py9FWH4Kuac/RVzTn6qnacvpo+1ZV2+KomAX1Vc46+6i3BVx0avqoZR1/VjKOvasbR VzXj6Kseb/iqQ/BVlz34aqL0U4XiqypO9FVNOfpqTbkXvvr8gZ21ysBXnYmvKgNfVQa+OpjOV7Uh +urTdu3QV5+bpDl99WmbhOir3lJ8tWHiq0/b3kVfHVDnq0/blQVfHUznqx5v+KpD8FWF6KsJUzUj +GoDxVcTpsZX06fGV73j8FXvE3xVkwC+qhlHX/WG4KsOwVc15eirmnL0VU05+qrmHH1VA05fbaD4 qkPw1cTJ3ycK0+wx0ZSjr2rK0Ve93/BV7zd8VZOAvqo5R1/VluirDsFXNefoq5pz9FXNOfqq5hx9 VSNOX3Uovtow8VUdOvpqoLoIC1/VnKOvas7RV7Xj9FXtOH01UH1VCL6qOUdf1XSirzo0fFUzjr6q GUdf1Yyjr2rG0Vc93vBVh+CrenOir2qU6Ks6i9FXNeXoqzXlWl9dT5N65asNM3y1Yz58tWGGrzbM 8FUw7qtNQ/FVQJc3u4evdtDwVULqq4TMV1vmw1cZAfVVQO6rgNRXwbivdvEevsqTq1srhq8S0vff O2j4KqHLpoEPX2WY1Fd5duqrTQjiq002xVdb6JuvNhkXXyVUNxYMX21SLr7apFx8tUm5+GqTcvHV Jufiq03A46tNzsVXCdVH/cNXCdn6apOX8dUm5eKrTcrFV7sIDF9tsim+2kHDV5uci682LcVXm5yL rzY5F19tci6+2uRcfLXJufgqO143Tg9f7aAPX+0iMHy1Saf4anObi682ORdfbXIuvtrkXHy1SYL4 agcNX21yLr7atTR8tbkZfvhqk3Hx1Sbj4qtNxsVXm4yLrzbJFF9tbmDx1Sbl4qtNKOOrzeHiq03K xVevKffCV+eXu9kbBr7qTHxVGfiqMvDVwXS+qg3RVwfU+apD8NVAja96S/HVhomvzrq/HL46oM5X B9T46mA6X02/6+dj4auBdH2VUH05Db6qh6Ov6uHoqwnT/UcumOXlhu+GwQXjTC4YZXDBDKZ+bgoX zGDq16Z4wejBeMEEunyqORfMgOpXm3jBBKpvbeCC8T7lgmmYXDADqp9/4gWTMOlLrzycvfTKo+lL r12/ccGkS/rSK1NAX3plS/rSKwdFX3ptOs4CLyHQl14J1WoKBV4g/cAZobo9KAXeUnbKfbfA85ND gaejwgIvo6Lf/2VLTYGno8ICLxGoZRkKPIdQ4AXS7/8248sCL1CtplLg6cixwPMuocDT8WWBp+PL Ai+jUiEUeBlffemVLdWOo8DT8WWBp0PHAk8DzgLPIRR4geqLuCjwvCUUeA6hwMuwXF5oHQWeji8L PB1fFnh6abLAy9DVpxYo8NJS/awGCjwdXxZ4eptjgac3TBZ4gS61Wwo8TScWeN6nUeD5wVDgaTKx wNMbAQu8BkqBp4nCAk8ThQWeGgELvED6/V8mSoVQ4OkthQWe5mW+/9ukXL7/2+RAvv/bRHxbz1nz WaTosj1oO2fN57NNpvMyuT3uRYouDZ3mNN2LFF26dHrx813ou5zbTjHe70WKap/2++O23osUfYNe FAjbyx32DYMCwZkUCMqgQBhMfYMYBUKY+hwBBcJme+VYIGiPWCAEqlUECoQcTn8ghNDlizejQMjJ Xd4hT4EQSN8PZ7+r+6NA8AikQBhMVyB4QygQ0lI9ORQIm22EZIGw2QZGFghpqUotCoRAtdRAgbDZ tkMWCIHqQxIUCImTvh9OqMp/CgQNOAsEjwAKBA84CgSHUCD4qKBASEv1WQMKBO04CwQdOhYI3hIK BE1MFgg6dCgQdORYIOjQsUBIly5VRAoE7zcKBB06Fgg6dCwQ/HAoEHRUWCBon1gg6AXFAkFHhQWC DgsLBB0WFggaAhYIfnYpEBomBYKOCgsEbwkFgo8KCoRA+gVkQpcqIgWCjgoLBB0VFgg6KiwQvE8o EHToUiBoKFkg6KCwQPCWUCDooLBA0BsYCwQ/NxQIdVBeuOH+cjd7w8ANnYkbKgM3HEzjhmEaN9xt XxrdUHtENwzUuGEO17hhIHfDnFzjhoEaN0y/Gzf0CMQNB9O5oTcEN9xtOx3dcLftdHRDh+CGDRQ3 3G3bId3QW4IbBqqWCTf0luCGgXzxOAH/oQv9+QPbAJXBhe5MLnRlcKEPprnQwzQX+tMe6PNC1x7x 
Qg/UXOg5XH3+gQs9kF/oObnmQg/UXOjpd3OhewRyoQ+mu9C9IVzoaak+S8OF/rQNFLzQn7ajgRd6 oMujpFzogfQjYU3HeaEnBM1TomfZ9vDdC12DiQs9XaolJ4pAjyWKQI8likCPJYpAbwlFYKCmCNRY sgjUWLII1FiyCEyfaqWIIlDPDkWgBpxFoDeEIlADziLQW0IRqJnCItADjiLQA44iUAPOIlAvKBaB enYsAjXiLAK9JRSBGnEWgd4SikBNJxSBGnAWgRpwFoEecBSB6ZL+SmRzciwCNeAsAr0lFIEacBaB GksWgQ2UIrBGvFej6f56x5kzUaOGGWrkTNQojKsRGFejQI0aeY+gRoBcjXA4VyNAqkY4OVcjQK5G 6LerUROBoUZhGjVqGooaoSWvgQB5DdRAUaNAzQaapqWoEaAqBlGjpqWokYcAaoRg2hcTyfgGGh8V qBG6VL0vagRIvxrfRSBq5KMCNfKWoEY+KlCjpqWokY8K1MhHJWrkowI18lGBGnnAoUbNyUWNGihq 1IxK1KhpKWrkowI18pagRj4qUCMfFaiRDwvUyIcFaoQ+1c+9RI187KJGTQSiRj4qUKOmpahRMypR I28JauSjAjW6jMoLDZlf7+N1BhoymDoxQkMGU/eTQUPClAGEhmh/qCEOQUMaKBriEDTEIWiIQ9GQ homGOAQNcQga4lA0ZDCdhnhD0BAdXWpIoLpnFhoSyF/UDNRpSM5Of6iYkC/FAvKlWED+oiYg+4w7 A169ABoSqLoKNESHjhriowINCVRPDhqiLVFDdOioIRomaogGnBqiQ0cNCaQvano2UUN06KghOnTU EB06aojem6khGZW6jAMNCfRThaIhOnTUkMTJ9/H6VUcNCeQvagKqD/yhId4SNESHhRqiwwINSSwv P2YcDcnQ1TBBQwKV+zM1JJDv4/Who4boqFBDNExYofEwYYXG44QVGpydv6iJluqC0Mdjeo8SHtM3 Ucpj+kuUXnjY8nq7pDPwsMHMJRDwsME8yltA8DA9FjxsMHUzLz0sUJ3y4WEDuvwIDjwsUD0cPCyH 0x8RZEu+HKQBoIc5BA9zCB7mUDxsMJ2HeUPwsMV2ftDDFtuNQg9bbDcKPWyx3Sj0MO0TPWwpG02+ 62GB9EcEmzjRw9Kn6obxMA8TPMzDBA/zMMHDPEzwMA8TPEzDRA/TMNHD0qfqDvAwjRM9TONED9M4 wcM0TPQwDRM9zMMED/MwwcPSp2pP8DAPEzzMwwQP0zDRwzRO9DCNEz1M40QP0zjRw7xP8LBA1Wfg YRpMeJgfDR6mAaeHOQQP81GBh2lL9DAdFXqYjgo9TEeFHpY+1e9FwMP0hkkP01GJh3kA4GEaSnqY t5Ttkk0os12yCWW2SzahzHZJBKC+lJP3qTxKeJ/KrwK8T+W3Z7xP1Rwu71N5MPE+lQ8L3qdqWsr7 VJdheeHQ6+ttpc7AoZ2JQysDh1YGDr3q7kw49Kr7POHQA6pfQKBDp6X6tBQOnZbq80s49KpbXePQ GgA6tENwaIfg0A7Fodcf2FbaNASHXm1PJR161X2ecGhtiQ696j5POLS3BIdeyx7O7zp0oMahNU50 6PTp8ssfw6E9THDoNOSvHDURgEM7BIf2gMOhtSU6tAacDq0Bp0NrwOnQ6VPVTDi0RhwO7ScHh9Yw 0aG9JTi0hwkO7WGCQ3uY4NCaTnToQNUy4dAaSzq0hoAOrcGkQ3tLcGgNJh1ag0mH1mDCodOly4fp 4tCB/KNzTSzh0B4BOLTGkg6tLdGhNZZ0aI0lHVpjSYfWnKND6101Dq2hpEN7AODQHko4tLcEh/ZQ wqE9lHBolz04dEJwEe04tMaJDq2DQofWQaFD++Hg0BpMOrQOCx3aWxrfJLiMygvP3l+/1eEMPNuZ eLYy8Gxl4NmDuWzcg2cPaK2fx4JnB6oKDc8O1OwZSJ/qB/zg2YHsp995tIuLx7O9IXi2hpKe7VA8 ezCdZ3tD8Ozd9vrSs3fbNUzP1pbo2bvtGqZn77YhmJ69lw3B3/XsQI1na5zo2emTe7aHCZ7tJwfP 9ljCsx2CZ3vA4dnaEj1bA07P1oDTszXg9Oz0qfFsjTg8208Onq1homd7S/BsDxM828MEz/YwwbM1 nejZgRrP1ljSszUE9GwNJj3bW4JnazDp2RpMerYGE56dLjWeHajxbI8lPNsjAM/WWNKztSV6tsaS nq2xpGdrLOnZmnP0bL2rxrM1lPRsDwA820MJz/aW4NkeSni2hxKe7bIHz04IGs/WONGzdVDo2Too 9Gw/HDxbg0nP1mGhZ3tL8ew6Ki88+/iBV4SUgWc7E89WBp6tDDx7MJe1Y3j2gC6L3vDsBopnB6pS C89On/wzCYB8PVsDQM92CJ7tEDzboXj2YDrP9obg2QllDQA8O1CpkOjZ2hI9O/H2T2gBqnYMzz5s qz49O32qOzng2d4SPPuwrfrw7DA1TPDsdKl+bwGe7V2CZx/2YgA9O1CNADw7UHVaeLZGgJ7tEDy7 geLZGkx6tl6+8GyNJT3buwTPzgXV7AnRq46erXc5enZO7rLtNp6tlyY9Wy8DerZev/RsDSY92w8H z/aW4Nk6LPRszRR6tl528GzvEjzbuwTP9i7BszV56dnaJ3p27VNvGPM8doKW8YlhhKlTbAwDTDGk GEYYNwy0U66GGAYY33Ua6PKmcgyjaSmG4acPw/B+wzA8ADEM9Lv+REEMA0erT/pjGIDqWmYMozm5 YRhgLvIwDKM5txhGoGYlz1uCYQDSnxVucgCGAcgNo4FiGE3HYxhoqegaDMOHLoaBhupsHsPwtIRh oN/1nZ0YBobudz9yX3m83onjDO4rg3n8Ru8rYVa9rwym7orHfSXtNPeVNOQ7cQJdnBz3FT193lcG 1OzECXS50nNfedhDJ95X0u9608R9JVCNAO4rgfTXQRoG95VAdTs/7isZOf91EETAf/4Rg1IvdNxX dOR4X8nI+RMCQLUCwH3FIdxXkgOXjeq5r6Ql3c3uicL7Skal/hYJ7iuaKLyvBPK3CpvDpXLxllC5 eEuoXBCC+huRqVw8nVC54HBeuWDoasdH5eJ5icrF8xKVC452+Y3IUbn4bQeVC1qqDyRSuXjKoXLx nEPl4kOHysVzDpWLZwoql+ZwqVyallK5NC2lcvGcQ+XiOZfKxVMOlYunHCoXzzlULp5zqFw851C5 +A0TTwjQUv2JxDwh8JzDEwLPOTwh8KHDEwLPufGEwPMETwiag+UJQdNSnhA0LeUJgWccnhB4xuEJ gaccnhB4yuEJgaccnhA0bpknBJ5yeELgtzk8IfDbHJ4QAKoPJPKEwFMuTwgQy1qQ59dBPOPw6yCA apjy6yCeKfh1EM85/DqIB/y8B5wdLzenmgT7aU7TvUpRXW84xfh5r1JUWzrFeJ/uy/fSSYuWfflu RfKiyFltIxCKnMHUuxqKnLRTbmoocsKUBEORs9oDLxQ5qz0bZpETqClyVnvexSJntad5LHICNYsn 
GmwUOWEur/WmyMmI+DYoQNtHJvzx/OdfX97ZbZrPoZnf/v3XL39+++eXU+X29TSj0w3XrxPD+X/m 8/TWD+D8r96J9//027/86gzkcbz95R9ffv2H+9vf/9Om2Xlj+Das9c6ZNBtMXaBAmoXRN8PBHJpm g6k3RKRZmKaWHlDzhR6cvf80rXeJaZZzq7mINEtLXksP5ofWQh7TKusTGb8wdY0t4xfGv7AUxtdY w9R8yvihPzUMGT9AfpsAVIcm4wfI3+wH5LeJQD82No/p5TqVMxibwcy6ThXGr60wNRAYm7TjH+H0 hjg2DZSxCVRnDIyNxohj49C4tjpm3MIDPfwjnB5urFM1ERjrVE28s07VNJR1qgbKOpXHG+tUHias U3XQWKdqoKxTNVDWqXAN+E5WnN1laXusU3XQxzoVRqWUL1inQkN1CSrrVIB8J2sTgaxTddBYp0Ly 1hWRrFN58mKdyhMT61SALp9mGOtUgOryUtapANVFobFOhVjWNaGsU/nQYZ3KY4l1qgbKOlUDZZ2q gbJO1UFjncohrFN5EmCdypMA61SeBFin8iTAOlXT8axTNVDWqRporFN1zFinaqCsUyGW9Xl+1qn8 rop1Ks9wrFP5TRzrVD50WKfys8M6VQNlnaqDxjpVA32sU+EW7utUHkqsUzUByDqV5y7WqZp4Z52q OVzWqRrZyzpVB411qgbKOlUDZZ3KbwRYp/KzwzoV4nTZ7jrWqXxmxToVoLooNNapLl164dnnfeGl ZysDz3Ymnq0MPFsZeLYy9OwBXR6ZwrMD1U0d8OxAvs+k6RM826F4dsPEsx2CZzsEz3Yonu0MPDtR quoPz/ZQwrMD+T4T7xM9u4Hi2Q7Bsx2CZzsEz3YInt1Aw7OdgWc7BM9OwOuuYHi2twTPbqB4tkL0 bIfg2Q7Bsxsonu0QPNuheLYz8GzNcHq2twTPdgie7RA82yF4dgPFsxWiZzsEz9Z7OD1b7+H07ECX 7a7xbO8TPNsheLZD8eyGiWfrDZOerelEz/ZYwrMVomc3UDxbA07P1hDQsx2CZzdQPNuh4dmOwLMd gmc7BM9uoHi2Q/BsHxR4tssePLuB4tkOwbMdgmc3AhrPdgie3UDxbIfg2Q7Fs+tF98Kzz1F66dnK wLOdiWcPxvddhqmVCzx7MJcnBPDsNNR4dlryLzN4l+jZev70bIfi2Q0Tz3YInu0QPNuhePZg6jot PTtRumzOjGf7yMGz01JdqIVna7/p2Q0Uz3YInp3UrbsO4dk5u7ovGp6tKUfPTgguP8o5PFsDTs/W gNOzPeDwbA8TPLuB4tkK0bMdgmc7BM/O0PlXhD0E9OxA+saYjwo9O6NSV/Th2X5y8GyH4NkOwbMd gmc3UDxbb/P0bL0M6NkacHq2RpyerRGnZ+vZ0bMdgmc7FM/2MMGzde6hZ+tUR8/2WMKzFaJna8Dp 2RpweraGiZ7tEDxbg0nP9rMbnu3nBs/2c4Nnp0eNZ+t9np6dw/mXGQDVhXh4tssePLuB4tkOwbMd gmdHLkry0rMTp7qjCZ7dQPFsvaDo2RpMeHZlXnj2bluX4NnKwLOdiWcrA89WBp6tDD3bIXh2A8Wz B3TZHwjP9pbg2Q7Fsxsmnu0QPNsheLZD8Wxn4NkNFM92CJ6t8aZna0v07AaKZzsEz3YInu0QPNsh eHYDDc92Bp7tEDzbAw7P9pbg2Q0Uz1aInu0QPNsheHYDxbMdgmc7FM92Bp6tAadne0vwbIfg2Q7B sx2CZzdQPFsherZD8GyH4NkOwbM14vRsbwme7RA826F4dsPEsx2CZzsEz3YInq0QPbuB4tkacHq2 twTPdgie3UDxbIeGZzsCz3YInu0QPLuB4tkOwbM93vBslz14dgPFsx2CZzsEz24ENJ7tEDy7geLZ DsGzHYpnJ+DLj3j20/azw7OVgWc7E89WBp49mLr6BM8eTF18omc/fVt8PPupu/Dh2Tlcs3dez42e 7VA8u2Hi2Q7Bswc019cx4NlP3fMfz3YGnv3UNwzg2Qml/yqet0TP1gjQsxsonu0QPNsheLYGnJ6t waRnJ06XnzAenq0JTs9OQ/4dAb8K6NkeAXh2A8Wzc4k3nq2ZQs/WTKFnp6X6aS94doaufiIAnq1x gmfrqNCzdVTo2RpLerZD8GyH4NkOwbM9TPBsDRM9O1Cznu0QPFuvFXp2zZR+Vlym128tOZNZsWHG rOhMZkVnMis6g1mxgTIrdtCYFQM1q09NS5kVG2jMih0zZsUGyqzYQJkVG2jMig2TWbGDxqzYQJkV Pd6YFb0lzIodNGbFBsqs2ECZFRsos2IDZVbsoI9ZsWEyKzZQZsUm4JkVm5YyK3bQmBUdwqzYQJkV GyizYgeNWbGBMis20JgVGyazogccs2LTUmbFBsqs2ECZFRsos2IHjVnRIcyKDZRZsYEyKzZQZsVL xF/MivPHVil/zzqMv2cdxt+zDtPMioPxb86hP773CR2qlRJmxbTktWKgH3o7elle7yYLU7+ah4gu 9hgWER1MlSlEdLGnfYjoYk+FGdF02n/nBz26vN6eiAby6tvjSM9wKJ7RMPGMAV2+mADPSL9r4sEz AtXEi2ck3v52NKDLlrN4hmYJPUOHl56hYaJnJAcunxCNZ3hL8Iy0VN/Yhmdo7tIzEvDGMzSY8AyN JT3DuwTPSJfqggA8w8MEz2igeIZC9AwNOD1Dz46eoZcvPSNQPRw8QyMOz/AuwTM04PQMDxM8wyF4 hscSnuEdh2ek4/47P35TpWfo4egZGid6ho4vPaO29GJWXF/v/XAGs6IzmRUHU1/MwKy4liX1782K YfyzT4D8VzkA+fd4ADWzop4/Z0WHMis2TGZFhzArOoRZ0aHMis5gVmygzIoOYVZc7TEWZ0VtibNi A2VWdAizokOYFR3CrOgQZsUGGrOiM5gVHcKs6AHHrOgtYVZsoMyKCnFWdAizokOYFRsos6JDmBUd yqzoDGZFDThnRW8Js6JDmBUdwqzoEGbFBsqsqBBnRYcwKzqEWdEhzIo14i9mxd3W3jEr7vbUBLNi GK++d3tmglkxjNeKuz1U4KyYE7tMeJkVd3uuwllxt+cqnBV3ezzBWXFAP1bHP8dnPS6bcsfYDKaa EsbGmYxNGH0rLEzzvCB9rh9+x9gM6FIyYmy01xybQNV9MDbab46NQzGWhomxOARjGdDl0/cwFg9T jMUDAGNpoBhLjuZfX/CWaCwaARpLEuXyyf4YS/pU34mCsaRP/nu9gBpj8Y7DWDTDYSw6cjQWTXAa iwccxuL9hrE0UIxFIRqLQzAWzXAai8aJxqIhoLE4FGPRDKexeEMwFo8AjCWJUkt0GItHAMaSPl1+ IizGkoBfPtkfY9GO01g0w2ks2nEai6Y4jUUjjt2qTcezW7WBslu1gcZu1Y4Zu1U9w7FbtQlTdqs2 EchuVYewW9VTHLtVm5ayW9VDgN2qnuLYreohwG5VT/GxWxXxLp/ywG7VptvZreoJjt2qTbezW9UT 
HLtVm3hnt6p3HLtVO2jsVm2g7FZtoOxW9QzHblWPE3aregiwW7WBslvVMzy7VS8N9Z693l9/5cyZ eHbDDM92Jp7tTDzbGXh2A8WzO2h4dqDLp9Di2U1L8ewGGp7dMcOzGyie3UDx7AYant0w8WxE6fJQ bXh2E8p4NqDqtPFs7xM8u4OGZzdQPLuB4tkNFM9uoHh2B314dsPEsxsonu1DB89uWopnd9DwbIfg 2Q0Uz26geHYHDc9uoHh2Aw3Pbph4NgJeH+HFs5uW4tkNFM9uoHh2A8WzO2h4tkPwbITAd6t6YsKz /Z4CzwbkX19oOh7PbqB4dgMNz+6Y4dkNFM9uoHh2A8WzPeDwbA84PNsDDs/2PsGzGyiejcNVzYxn N3368Gyfe+DZTZTi2c3B4tkdNDy7iXc8u+l4PLuRvXh2Bw3PbqB4dgPFs73j8GyPODy7g4ZnezDh 2T4s8exLv1949jx2OOlOvzCXPWfx7MHUWhGePdv+CHh2+lOWIOHZYXw92ztNz55tTwM9e7bdEfTs 2TaI0LMdimc3TDzbIXh2wu3r2U2Y4tkeAHh2A8WzZ9uNQs/WlujZGgF6tiY3PVsjQM/W9KZn53B1 8Rie7R2HZ2uGw7O93/BsTXB6tgccnu39hmc3UDxbIXq2Q/BszXB6tsaJnq0hoGc7FM/WDKdne0Pw bI8APFtTjp7tEYBnp0/+67I43GXzWjxbO07P1gynZ2vH6dma4vRsjTg92zsOz3YInu1QPLth4tma 4fRsDxM82yMAz1aInq0pTs/2luDZGgJ6djKlWi08O31qPNuh4dnebXi2dxuerQlOz9aRo2drgtOz vePwbO04PbuB4tkOwbMdgmdrhtOzNU70bA0BPdsheLZmODy7NvTCsx+vd7o6A892Jp6tDDxbGXi2 MvRsh+DZDRTPfti+NHq2twTPdiie3TDxbIfg2Q7Bsx2KZzsDz26geLZD8GyNNz1bW6JnN1A82yF4 tkPwbIfg2Q7BsxtoeLYz8GyH4NkecHi2twTPbqB4tkL0bIfg2Q7Bsxsonu0QPNuheLYz8GwNOD3b W4JnOwTPdgie7RA8u4Hi2QrRsx2CZzsEz3YInq0Rp2d7S/Bsh+DZDsWzGyae7RA82yF4tkPwbIXo 2Q0Uz9aA07O9JXi2Q/DsBopnOzQ82xF4tkPwbIfg2Q0Uz3YInp14N+vZLnvw7AaKZzsEz3YInt0I aDzbIXh2A8WzHYJnOxTPTsDHV87efz7+/wCHvFY1CmVuZHN0cmVhbQplbmRvYmoKMzQgMCBvYmoK PDwvRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDEwNDU3Cj4+IHN0cmVhbQp4nK2d3aoky3Vu 7/sp1guoVP8/cDhgHcu6thH4AYwtMMhg+f3h5Nrqjm+smT2+6guzYXurPToqcsbMmCMio7JOH8ft n9+dtn89XuePf/vrt//+9vkn99P5tz/4279/+9eP//r8w8Pj9hv7/T9+dzo9n8fPv3D6+PznX/70 8ff/+Ntfvv3+T8ePv/zP1s7p9rweP56n+8fpeLt/NvYfv7X++Sfn0+l8/PFHPwe3P30+v//p8Zm/ fvz8tL//x/Zp1+fH6X65XD8e598+/5P67c/u2/+ef/bJnT7Oz++tPe4/+cNP8PY6vX76l8ef/cad Pq6nOxqcf/gb+HkpP/vL888+udPH5cz2vv7ZP2///HSULvfPsfn7v2+X1+F8+zifD7dfGcTvQ/aH P3/7/T9tgb98/Pk/vp1+5MY2Iltfj6+PP//12/85Hm+3//vx5//M//t+OTyP218hc71+ZR6nw/24 9edLO6fBvA7X4+X5tZ3HV+b5OJy3v/i1nfNX5nU7HI/3r/25Hb8yp+N56/Tj64UdLwM6HbdeP19f W/qHCT0P1625GqLT+X7YEv769dr+34Au18PxdH58bWkE4HQ9HZ6zoedkXof76XbuF3d7bP2+fx2S 0/y0+23r92OMyR9/g/74Z0mX0+Pys0/e+DeZez/+SNvLFvn/rbzdJpafjxzyNsxN83Yxc9yQt2nH 8zbtPDVvw5w9bwO9PG+118zbQP/keav9Zt4u6PSPmreL2aUk8jbQHzxv06V5UyJvdWy3PN36/TVK cy45PY+H5/k4kmQG4Pk8bMVmZNu8uNf9cD2f+6Ccj9fD+XwdeTJy6Xw6H7ac/BrL83FCr63j9zEo o0/n82Pr+ONrn44jB86X29bx17kO3fl6OZwvxzEqIwfOt9PheDk9asTPt+fheRl3wRi58/1+uF+u Y1TuA3pcD9fL7dbD9Dxv/b5/TZTTKF7n13Hr9/PrqBzn0L0eW79f13oXbBE63K/He43l5XQ5XK8j MedNdzmfDlvQzzXDL+fXYSvF1w5tk+/zenvUOF2u163jj97v23nr9/NcU+5yP279ft16mO7Pw1ay n/3jHrfD83Y+1bvu8rwc7reZTmPoLq/T4Xq73utdd3l9Vqbbq4bpenxsHX+cajpdT9et489rjdM2 VWwdf/X793o5Hq7349c+zRp+vTwP5/t5DMsoGdfr/bDVnlvv+G2rmDML5thd71vJvPcMv963knl/ fB2Vy+z3Y6uZ9+eYVcdtcH1uRfP+etZ75fraiubjdKp9uh23ovk4fx2Vy3FCW9F8XEamjDDdTlvR fFxf9a67nbei+bj3WfV22Yrm4zEyZba0KdP98RyjMkJwu25Vs99Pt9tWM5/Dh2YK3DZpOj6HEM1k uj22ovkcRjTz5PbYiuZzKNFpzOG351Y0n8OJdtBrK5rP4UTz4zaL2zr+7LfKdisdnq9jv33vp61o vt6ozP28Fc3XdKLdwmkrmq/Lq97j902cjq8hRbs+Xbei+RpS9P3q1NP/rsfbv+USosfKQI8XMzUb ehxmRAJ6vJg5S0OP085cr0CP09DUNejxgna+Cj3Wy6cea7+px+n49Mzo8WJmcaEep98zAtDjQLNL 0GOHosfOQI8DnSYUPQ40Mph6vKA5/VKPE6Yph9DjjNyUWuixDi/1OMM7Pw56rPcA9VjHl3qcOP1h QkuPnYEeB5pdgh4HGjM59VjHl3rsHwc9TgT+6Hqs40s9zseN+4l6nFG5Tyh6rJlCPdZMgR7rPEA9 1nmAehxo59DRY7846LGmHPU4sZwBhx4XKHqsNzn1WOcU6rGOCvVYR4V6rKWHehxoGiT0ONC4D6jH WqGgxzoq1OOEaXYJeqx3HfVYb3LqsbZEPdaJh3qs40s9TpimaEOPNcWpx1rtqceB7r+iRo/zezVS BmrkTNRIGajRYq4jEFCjMHMrD2oUaFZhqNGCdpIFNdJuU40Czb08qFE+brebvdQozH0yUaNc3GwI aqShpBql365Gi9kJJNTIowQ1ysVN74Mapd9l5zAtzUoNNVpQ2znM1c09SKhRrq7sHGoIqEYaAqqR Ji/UKA0VNfIIQI0SgdkS1CjQ9CeoUaCiRgpRjQLNTUGoUaARJqqRDh3VSIeOaqSjAjXKqIwwUY3S 
0IwA1EiHjmqkyUs1cghq5AGHGgU6TShqpB9HNdKcoxrp0FGNdOioRjqrUo20GlCNMnYTghrp/Aw1 0kyhGmkSUI0Sy3lxUCMfFaiRjgrVKNBuezFqpJlCNdL7l2o0J543QvN8vhcaZSA0zkRolIHQKAOh WUwTmkBFaBbUhEa7RKEJVIQmH+dCE6YITS6uCE2gIjTptwvNYprQeJQgNLm4IjTpdxGatFSEZkFN aHJ1RWhydUVoNAQUGg0BhUaTF0KThorQeAQgNIlAEZpARWgCFaFRiEITqAhNoN1TzgiNDh2FRoeO QqOjAqHJqBShSUNFaHToKDSavBQahyA0HnAITaAiNJpOFBpNAgqNDh2FRoeOQqOzKoVGqwGFJmNX hEbnZwiNZgqFRpOAQpNYFqHRJKDQ6NBRaAIVodFModAEKkIzJ54qNPct5O+EpjBLaBrzQ2gKs4QG jO7QkFGhIaRCA8iFpnQ7QsOWJrSEhn0yoWFD87DVEpoSpQhN+7QlNPy4+czph9C0KC2h4aftnnD9 EJo2cktoSr8jNOzTPLS0hKaMXISG0CznS2j4cbNPS2han5bQtKtbQlPGdwkN+z29bwkNoO8u+mY2 OOvT+MwGymA2cCazgTKYDZTBbLCYW5kNtCHOBgXKbHC2EwScDQLNuwqzwdkOLGA2KExmg0RAH2UD us4VF2aDBe2enGc2WMzljz4beCgxGziE2UAhzgZnOyDC2SDQvPMwGyRMcwWA2SB90uVN6zhmA4cw G3jHMxtoxnE20GzK8qZBa3lTUi7Lm5JyWd6UlMvypuRcljclllneVOjH8qZBa3nDOM3H1D+WN2Tm UmItb0rKZXlTUi7Lm9bvtbwpE1iWNyUJsrwpOZflTWkpy5sGreVNybksb0rOZXlTci7Lm5JzWd6U iGd506Afy5syKlneEJqrwLW8KbNFljcl57K8KTmX5U25uCxvyvyc5U1JgixvSs7lUXZraT3KbtD3 k54l43LSs2RcTnqWjMtJz5JxOenZ4r1OejZonfQsg5KTniWbctKzQeukZ0m5nPTcp9wbX728PXpZ GPiqM/FVZeCrysBXF9N8VRuiry5obmTQVwOV1Wug4qveUny1MPHVix3toa8uqPnqxQ4AwVcX03zV 4w1fdQi+mghMFYWvXuyQEH31YoeE6KsJU/HVdLz4ql4dfTV9mt4HX9UkgK9qxtFXvSH4qkPwVU05 +qqmHH1VU46+qjlHX9WA01cLFF/VnKOvajrBV8PodnxJOfqqphx91S8Ovpo+zZ12+KomAX1Vc46+ qi3RVx2Cr2rO0Vc15+irmnP0Vc05+qpGnL7qUHxVU46+GmieUIWveizhq5pz9FXNOfqqXhx9Ve8D +qp2nL6qOUdf9Zbgqw4tX9WMo69qxtFXNePoq5px9FWPN3zVIfiqphx9VVOOvqpTIX1VU46+OlPu ja/e3p6HLQx81Zn4qjLwVWXgq4tpvqoN0VcXNK2Hvnqz8z/01Zud/6Gvekvx1cLEV292cou+uqDm qzc7cAVfXUzzVY83fNUh+OrNjnfRVzVM9NVAu03Y+GrCVHw1HS++qldHX73ZkTP6ql9dfFUzjr7q DcFXHYKvasrRVzXl6KuacvRVzTn6qgacvlqg+GpCMLdF4auaTvBVZ+CrmnL0VU05+qpfHHw1fZr7 wvBVTQL6quYcfVVboq86BF/VnKOvas7RVzXn6Kuac/RVjTh91aH4aiKgx0caBF/1WMJXNefoq5pz 9FW9OPqq5hx9NdD8gg98VXOOvqohoK86tHxVM46+qhlHX9WMo69qxtFXPd7wVYfgq6o79FWdCumr WsXoq5py9NWZcm989f72uHNh4KvOxFeVga8qA19dTPNVbYi+uqDdl7bhqw7BV+92vIu+ereDYvDV wsRX73Ywj766oOardztPB19dTPNVjzd89W4H8+irdzuYR191CL6qsaSvJkzFV3N1xVc1BPRVzSb6 aoGWr2rG0VcDzYMF8FUPE3xVU46+qilHX9WUo69qztFXNeD0VY0lfTXQfGQOX9VgwlfDFF/VlKOv asrRVz0C8FWPAHzVIfiq5hx9VVuir2rO0Vc15+irmnP0Vc05+qrmHH1VM4W+qrMTfNUjAF/1T4Ov BppSC1/VnKOvas7RVzXn6KuaBPRVh+CrmnP0VW8Jvqr3b3xVM46+qhlHX9WMo69qxtFXNQfoq35t 8FVtib6q2URfDVR8VVOOvjpT7o2vPn/hNLsy8FVn4qvKwFeVga8upvmqNkRfXVDzVYfgq4GKr3pL 8dXCxFefer4cvrqg5qsLKr66mOar6ffcqoavBir7q4Hmd9Pgq4GKrxYovpowHX/hhjkd3x/4diY3 TGHWDeNMbpgw8yVRuWHC7F4knRvGPww3DKCZU7lhAs13LeGGATS/tZEbpvRp3TCNWTdMoPnSJtww CJN+n5UfZ99n5afp91lbv3PDoEv6fVamgH6flS3p91k5KPp91tJxLPAQgnkTZ4EHSN9d1qAs8ADt Ttn8WOCh33PNmQVeubgs8HxUsMDDqOirfdmSL/B8VLDA8whggVegLPAA7d7/uxZ4gOY7wLLAAzQP x6wFno8KFng+KljgIZY7aC3wMCrz4rLAQ0tz0ZkFXhmVLPD8NsACD5A/kChQFniA5nvCssArLWWB 16C1wMOw6LvLygBjgecDnAWe31BY4GHo/IEEWtJ3l5XxxQIPYZrrpCzwPJZY4PkNhQWe379Y4HkS YIHniYkFns8Ea4Hno4sFno8uFnhefbHAAzQDkAUehndCWeD5RIAFnmcTFnieKFjg+chhgeeJggWe J0pe7VvyMq/2LYmSV/uWRLnftqr5GlL0vUa/cejT+0PozsChnYlDKwOHXsz8ki0cOsz8FiMc+mTH yejQ2iM6dKAp2nDofJz+PAah3ftelkPn4vRnXQj5V6jRb33/b4tAHHoxzaG9ITh0WtL3/xKa3geH PtkZPzq0twSHDjRtHA59suN7dOiTncyjQydO/hVqQNOP49AacDq09xsO7WGCQ3vA4dDeEhxaA06H DlQcOpD+PEYJOB1aA06H1ojDob1LcGgNEx1aA06H1huKDu0Bh0MHmgsEOLRmOB1aA06H1oDToTXg dGgNAR1aI06HdggOrcNCh9bbDg6to0KHDuSH0AsEh/ZRgUPrqNChdVTo0BoBOrTeLHRoHRU6tI4K HVozJQ4dZJovHFpHjg6t9xMdWmc5OvQcuTdGd3l/TNsZGJ0zMTplYHSLKUYXphjdxQ5c0ei0RzS6 QMXo8nHF6AK50eXiitEFKkaXfhej8wjE6BbTjM4bgtFd7Fweje5ip+lodA7B6C52VI5Gl5bm6+tg dNpxGl2g6YYwOm8JRnexo3IwugR8NgSju4xDcD81Oh06Gl26tHuBX4zOLw5Gp0NHo9Oho9GlJX1t cfs4GJ0OHY0u0Nw6jdHpyNHodOhodDp0NDodOhqdhwlG52GC0RUoRqcQjU7Hl0anAafR6dDR6ALt 
zsbE6HQmoNHNsXtTX6/vj5U6g/rqTOqrMqiviyn1NUypr1c7IML6qj1ifQ1U6ms+bj5PQ30N5PU1 F1fqa6BSX9PvUl89Aqmvi2n11RtCfU1L89ks6uvVDuSwvqalUl8D7R5Npr5e7awN62ug+SQU9TUh mNUF9fU6jtH8tL5qMFFf06VZg1FfPZaorx5L1FePJeqrxxL1VZOA9VVjyfqqsWR91ViyvmrHWV81 TqivGnDWV28I9VUDzvoaaO5zoL76xaG+esBRXz3gqK8acNZXTQLWV40T66tGnPXVW0J91YizvnrH sWMSaJbz7JhowLljogHnjokHHDsm6VLZMdEwccdEA84dE28JOyYacO6YeEvYMdEU547JjPgbNbq/ P8HoDNTImaiRMlCjxRQ1ClPU6K6H/KBG2iOqUaCiRvm4okaBXI1ycUWNAhU1Sr+LGnkEokaLaWrk DUGN7nY4kWp0t2OOVCOHoEZ3O55JNfKWoEaBphhAjbwlqJGGgGqUYOobOMGUA1k6KlSjdGk+cYIa BdrtKkSNPAJQIx0VqpG2RDXSUaEaeUtQIx0VqpGOCtRIR4VqpKNCNdKAU4384qBGDkGNfFSgRt4S 1EhHhWqkLVGNdFSoRjoqVCMdFqqRDgvVKH2ap5+gRjp2UCOPANRIR4Vq5C1BjXxUoEbaEtVIR4Vq pKNCNdJRoRrpqFCNvE9QI7+69TDJETxMcggPk3Tk+DDJW8LDJB05HsjylnAgS6PEA1lz5N4I5PMX TvQrA4FczFQaCORi5hlVCGSYkXgQSO0PBdIhCGSBIpAOQSAdgkA6FIEsTATSIQikQxBIhyKQi2kC 6Q1BIHV0KZBPOwtJgQzkr3AP1ATyaWf8KJBPOy1IgQxU9tYCzUdlEMgE0wUyTBFIHRUKpAccAhlo 7i5CIL0lCKSOCgVSw0SBDDQPx0AgdegokIHKiX7vUwRSR44CqUNHgdSho0DqtEuBzKjoL1QR2h00 ikD60EEg9TagQGosKZCB5j4WBDLQ/Ho0BFJvcgqkQxBIHTsKpI4dBDIB118j5/j6V7YBjfmZAhlI f428jC8FMmEqp5E0TBRIDRMFUuNEgczV+Ve20dI0sSWQGiUKpEcJAjmj9MbDXr9wKlwZeNhiziMQ 8LDFXMb3AeFh+lnwsMXMrxrQwwLNkg8PW9Du57DgYYHmx8HD8nH6S6FsyTfyNAD0MIfgYQ7BwxyK hy2meZg3BA972bk8etjLTvjRw152VpAe9rLje/Qw7RM97DVO5v3UwwJNM4KHaZzoYenTtMx4mIcJ HuZhgod5mOBhHiZ4mIcJHqZhoodpmOhh6VP5ZqXGiR6mcaKHaZzgYRomepiGiR7mYYKHeZjgYenT VCx4mIcJHuZhgodpmOhhGid6mMaJHqZxoodpnOhh3id4WKDpM/AwDSY8zD8NHqYBp4c5BA/zUYGH aUv0MB0VepiOCj1MR4Uelj7Ng9rwMJ0w6WE6KvEwDwA8TENJD/OWsJHnocRGnocSG3keSmzkJQDz u4fYyNMo8ZuVehfwm5U6PfOblf5x+GalBpPfrNRh4TcrvSV8s3IOS3fozTjeOrQzcejCLId2Jg7t TBw6zE5F49CA3KEDzXehwKHR0nzOHYdGS/PJcxwaLU0bXw7tAYBDFygOXaA4dIGWQ4cpDl0aikOj pQnFoQMVh/aW4NCA3KFLS3FoXJ07NCB3aI8THBp90nP4JUxxaDQ0zTcOXSIQhy5QHLoEPA7tLcGh PeBwaA84HNoDDodGn6ZmxqE94nHocnFxaA8THLq0FIcuYYpDlzDFoUuY4tCeTnBoQNMy49AeSzi0 hwAO7cGEQ5eW4tAeTDi0BxMO7cGMQ6NL80sGcWhAczc3Dl1iGYcuEYhDeyzh0N4SHNpjCYf2WMKh PZZwaM85OLTPqsuhPZRw6BKAOHQJZRy6tBSHLqGMQ5dQxqGL7MWhEYKdaC+H9jjBoX1Q4NA+KHDo 8nFxaA8mHNqHBQ5dWrpvVfP1UyN449nn99/HcQae7Uw8Wxl4tjLw7MXsjlzCsxd0my/Kg2cHmgoN zw7kZwbQp/kqT3h2oN2B0uXZ+bSdi8ezvSF4toaSnu1QPHsxzbO9IXj22c5W07PPdt6bnq0t0bPP dt6bnn22o9z07PM4yv1Tzw5UPFvjRM9On9yzPUzwbL84eLbHEp7tEDzbAw7P1pbo2RpwerYGnJ6t Aadnp0/FszXi8Gy/OHi2home7S3Bsz1M8GwPEzzbwwTP1nSiZwcqnq2xpGdrCOjZGkx6trcEz9Zg 0rM1mPRsDSY8O10qnh2oeLbHEp7tEYBnayzp2doSPVtjSc/WWNKzNZb0bM05erbOqvFsDSU92wMA z/ZQwrO9JXi2hxKe7aGEZ7vswbMTguLZGid6tg4KPVsHhZ7tHwfP1mDSs3VY6NneUjx7jsobz76+ /3KXM/BsZ+LZysCzlYFnL2a3dwzPXtBu0xueXaB4dqAptfDs9MnfKwPI97M1APRsh+DZDsGzHYpn L6Z5tjcEz04oZwDg2YHGComerS3RsxPvaZnw7EDTjuHZVzuqT89On+ZJDni2twTPvtpRfXh2mBkm eHa6NN8GA8/2LsGzr/bFAHp2oBkBeHag6bTwbI0APdsheHaB4tkOwbM14vBsvcXp2RpwerZ3CZ6d u24qNDxbb016tk6F9Oxcnb8p0O9ferbeK/Rsvcnp2RpMerZ/HDzbW4Jn67DQszVT4Nl6a9KzvUvw bO8SPNu7BM/W7KVna5/o2bNPbzTksY6Lzh9Sj4YsZtZhaEiYoVHQkIcdn4CGpJ1xN0BDwvjR1EC7 L6JDQ7wlaIhePjVE+00N0QBAQ9Lv+Ysm0JB82jwOAA0JNDc8oSF+cdGQMDvDiIb4tUFDHnYwhBqi LVFDAs2XA0NDNAeoIYGKhjgEDfGOQ0PS0nA6aogOHTQkDc2SDw3RtKSGpN/z2z/QkAzdP/7KvPL6 heM6ymBeWczlH3ReCXPTeWUx8+g85pW0U+aVNFSO6yxoJ+6YV/TyOa8sqB3XWdDuTs+88rInU5xX XvaMi/NKIH93RYEyrziDeeVlz8E4rwTyI+8Y3vIYIWHyrx5i5OZsgHlFh5fzSobXf0wIkP+YUIEw ryRR9Mg7GirzimYT55WMypyjMa84hOVNoN27NbO8CTS/wYflTTruvxbrScDlTT5uLkqwvEnA/auH nk5Y3mg2cXmTTyvLG51RuLxJS/OBBJY3milc3mimcHmjScDljUJc3mimcHmjmcLljWYKlzeaKVze aKZweaOZwuWNZgqWN5opXN5opnB5o5MTlzdpaY4KljeaKVzeaKZweaNJwMcIDuExgmYKHyNopvAx gmZKHiNonvAxguYJHyNonvAxguYJHyNoovAxgiYKHyPolMLHCDql8DFCIP8xIU8UPkZIMIen8TGC 5gAfI2gO8DGCt4THCJqXeIygUrTdJofL8dhH5b6Z0+k4pGhmyn3z4tfmr1+HboZpE+PHcUjR7v0f 
x8vhdvy5FOkCYc4Jx1/ZqLgc7WROFhRh5oSVBQXaGWOQBQWYkYVZUISZSZgFRZjd29yzoAA0F/xZ UAB6TWgtKEpLWVAA8u/QAtKf8/UBwYICkL8MD8M2w5QFBaD7j3T55+2f//72yd5P5238zh9/+/dv //rxX982t3rcNlXZZO3220y9/Z/zVv9vP4Dtb30Sn3/1+3/8bgvk8/jxb3/99vs/HT/+8j81Fze9 /T72ux8qW7m4mLljgFwMo9/nBvPUXFzMnFqRi2F8cRuovFcHV+8/Le1dYi7m2mZWIxfTki5uw/zS 5sSWGLJhgPFbzNz0wvgtxt+LFMY3PcPMfML4pT8zDBi/QL45AWgODcYvkH8fH5Bvegb6tbHZlOrd xpEzGJvFnHXjKEy5txYzA4GxSTv+0lNviGNToIxNoLn7i7HRGHFsHMq9VZjM8wu6lHlew8153iOw No5KvLNxVBrKxlGBsnHk8cbGkYcJG0cNWhtHBcrGUYGycYR7wM+f4up2e81r46hBPzaOMCpjGYSN IzQ0z81m4wiQnz8tEcjGUYPWxhGS198H78mLjSNPTGwcAdq9UGFtHAHyX9ADNLdy1sYRYjl3crJx 5EOHjSOPJTaOCpSNowJl46hA2Thq0No4cggbR54E2DjyJMDGkScBNo48CbBxVDqejaMCZeOoQGvj qDFr46hA2ThCLP1XqH1WxcaRZzg2jnwSx8aRDx02jvzqsHFUoGwcNWhtHBXox8YRpnD/BT0PJTaO SgCyceS5i42jEu9sHJWPy8ZRkb1sHDVobRwVKBtHBcrGkU8E2Djyq8PGEeK0O6S6No68smLjCNDc globR7sudc/eFPitZzsTzy7M8mxn4tnOxLOdgWcXKJ4daLczE88G5Gug8nHx7AItz27M8uwCxbML FM8u0PLswsSzEaWdjC/PBuQHPwDNR4HxbO8TPLtBy7MLFM8uUDy7QPHsAsWzG/TDswsTzy5QPNsT HJ5dWopnN2h5tkPw7ALFswsUz27Q8uwCxbMLtDy7MPFsvw3g2aWleHaB4tkFimcXKJ7doOXZfv/C sz0E8OwCxbN9DodnA/IHtH518OwCxbMLtDzbJ0x4dul3PLuEKZ4NaP6cdTzbPw6eDWi6aDzbRwWe 7WGCZxcont2g5dkF+uHZBYlnFyieXaB4doOWZxconu3ZBM8ushfPbtDy7ALFswsUz24Cujy7QPHs Bi3PLlA8u0DLs3c33RvP3grWW89WBp7tTDx7MX4QMsxcucCzF7N7QgDPTkN+wBot+fsUvEv0bL1+ erZD8ezCxLMdgmc7BM92KJ69mLlPS89OlHanJePZPnLw7LQ0N2rh2dpvenaB4tkOwbOTuvOgMjw7 VzcPKsOzNeXo2QnB7kdQl2drwOnZGnB6tgccnu1hgmcXKJ6tED3bIXi2Q/DsDJ2/+9dDQM8ONHfG 49k6KvTsjMrc0Ydn+8XBsx2CZzsEz3YInl2geLZO8/RsvQ3o2RpwerZGnJ6tEadn69XRsx2CZzsU z/YwwbO19tCztdTRsz2W8GyF6NkacHq2BpyerWGiZzsEz9Zg0rP96pZn+7XBs/3a4NnpUfFsnefp 2fk4f58CoLn4gWe77MGzCxTPdgie7RA8O3Ixf1gMnp04zRNN8OwCxbP1hqJnazDh2ZN549kXO7oE z1YGnu1MPFsZeLYy8Gxl6NkOwbMLFM9e0G4TFp7tLcGzHYpnFyae7RA82yF4tkPxbGfg2QWKZzsE z9Z407O1JXp2geLZDsGzHYJnOwTPdgieXaDl2c7Asx2CZ3vA4dneEjy7QPFshejZDsGzHYJnFyie 7RA826F4tjPwbA04Pdtbgmc7BM92CJ7tEDy7QPFshejZDsGzHYJnOwTP1ojTs70leLZD8GyH4tmF iWc7BM92CJ7tEDxbIXp2geLZGnB6trcEz3YInl2geLZDy7MdgWc7BM92CJ5doHi2Q/Bsjzc822UP nl2geLZD8GyH4NlFQOPZDsGzCxTPdgie7VA8OwG//opn3+w8OzxbGXi2M/FsZeDZi5m7T/DsxczN J3r2zY/Fx7Nvegofnp2PK+dG9Nro2Q7FswsTz3YInr2g8/w6Bjz7pmf+49nOwLNv+g0DeHZC6V/s 95bo2RoBenaB4tkOwbMdgmdrwOnZGkx6duI096rj2Zrg9Ow0NLsEz9a7gJ7tEYBnFyienVu8eLZm Cj1bM4WenZZ2Pzwcz87Q+Rf7PU7wbB0VeraOCj1bY0nPdgie7RA82yF4tocJnq1homcHKvvZDsGz 9V6hZ89MeVMV7++/teQMqqIzqYrKoCoqg6qoDKuiQ6iKBUpVXFDbffKWUBUdSlUsTKqiQ6iKDqEq OpSq6AyqYoFSFR1CVdR4sypqS6yKBUpVdAhV0SFURYdQFR1CVSzQqorOoCo6hKroAUdV9JZQFQuU qqgQq6JDqIoOoSoWKFXRIVRFh1IVnUFV1ICzKnpLqIoOoSo6hKroEKpigVIVFWJVdAhV0SFURYdQ FWfE31TF54+jUv496zD+Pesw/j3rMKUqLsZfAof+lLNP6dDudQypimmprBUX9Evfjv5csr/1jMXM 19ghoi97DIuILmbKFCL6sqd9iOjLngozoum0/zoPelS+tRGorL41jvQMh+IZhYlnLGj3xgR4Rvo9 Ew+eEWgmXjwj8fZvRwPaHTmLZ2iW0DN0eOkZGiZ6RnJg907PeIa3BM9IS/Mb2/AMzV16RgJePEOD Cc/QWNIzvEvwjJc9PKdneJjgGQWKZyhEz9CA0zP06ugZevvSMwLNj4NnaMThGd4leIYGnJ7hYYJn OATP8FjCM7zj8Ix03H+dxydVeoZ+HD1D40TP0PGlZ8yWelW8nd6f/XAmVTHMPEufqghGz1iDUc8A M1/WlKoIaL6sKVURkK++Ac3CkaroMUJVLNCqio1ZVbFAqYoFSlUs0KqKhUlVbNCqigVKVfRBQVX0 llAVG7SqYoFSFQuUqligVMUCpSo26EdVLEyqYoFSFUvAUxVLS6mKDVpV0SFUxQKlKhYoVbFBqyoW KFWxQKsqFiZV0QOOqlhaSlUsUKpigVIVC5Sq2KBVFR1CVSxQqmKBUhULlKq4i/ibqnixvXdUxYs9 NUFVDKOrbzCjUKEqhtG1YpiyJ40L272ALlXxYs9VWBUv9lyFVfFijydYFRf0S+v423W91mN3KHeN zWKmKWFsnMnYhHFjudruD8YmfZ4ygrFZ0G7JiLHRXnNsAs2HyBgb7TfHxqEYS2FiLA7BWBY0DyTQ WDxMMRYPAIylQDGWfNrcN4GxaEs0Fo0AjSWJ4q/HR5/md6JgLOmT/8ouoGIs3nEYi2Y4jEVHjsai CU5j8YDDWLzfMJYCxVgUorE4BGPRDKexaJxoLBoCGotDMRbNcBqLNwRj8QjAWJIoc4kOY/EIwFjS p3lEAMaSgM+Pg7Fox2ksmuE0Fu04jUVTnMaiEcdp1dLxnFYtUE6rFmidVm3MOq3qGY7TqiVMOa1a 
IpDTqg7htKqnOE6rlpZyWtVDgNOqnuI4reohwGlVT/F1WhXxnu/iz2nV0u2cVvUEx2nV0u2cVvUE x2nVEu+cVvWO47Rqg9Zp1QLltGqBclrVMxynVT1OOK3qIcBp1QLltKpneE6r7hp649n39285cwae 7Uw8Wxl4tjLwbGXo2Q7BswsUz15QeWt8aQme7VA8uzDxbIfg2Q7Bsx2KZzsDzy5QPDuhLJ4dyJ+X +cfRswsUz3YInu0QPNsheLZD8OwCLc92Bp7tEDw7AZ+/sQXP9pbg2QWKZytEz3YInu0QPLtA8WyH 4NkOxbOdgWdrhtOzvSV4tkPwbIfg2Q7Bs9Px3W9VxbMD+VvOAJWdQYfg2Tqn0LMD+dsXPAT0bIfg 2Q7FswsTz3YInu0Bh2d7LOHZCtGzNeD0bA04PVuvjp7tEDxbpzB6tl/d8uwg/jbh0g482yF4tocS nh1o/sQWPNsheLbLHjy7QPFsh+DZOij0bE0Uera3BM8OtJPxeLZOvfRszQJ49mzojWc/1gknPekX ZnfmLJ69mLlWhGc/7HwEPDv9GVuQ8OwwZT9bO03PftiZBnr2w05H0LMfdkCEnu1QPLsw8WyH4NkJ d9nP9jDFsz0A8OwCxbMfdhqFnq0t0bM1AvRsTW56tkaAnq3pTc/Ox80FAjzbOw7P1gyHZ3u/4dma 4PRsDzg82/sNzy5QPFsherZD8GzNcHq2xomerSGgZzsUz9YMp2d7Q/BsjwA8W1OOnu0RgGenT7ut 6nh2Pm53eC2erR2nZ2uG07O14/RsTXF6tkacnu0dh2c7BM92KJ5dmHi2Zjg928MEz/YIwLMVomdr itOzvSV4toaAnp1M2Sl0PDt9mh8Hz3ZoebZ3G57t3YZna4LTs3Xk6Nma4PRs7zg8WztOzy5QPNsh eLZD8GzNcHq2xomerSGgZzsEz9YMh2fPht549usXTroqA892Jp6tDDxbGXi2MvRsh+DZBYpnv+xc Gj3bW4JnOxTPLkw82yF4tkPwbIfi2c7AswsUz3YInq3xpmdrS/TsAsWzHYJnOwTPdgie7RA8u0DL s52BZzsEz/aAw7O9JXh2geLZCtGzHYJnOwTPLlA82yF4tkPxbGfg2Rpwera3BM92CJ7tEDzbIXh2 geLZCtGzHYJnOwTPdgierRGnZ3tL8GyH4NkOxbMLE892CJ7tEDzbIXi2QvTsAsWzNeD0bG8Jnu0Q PLtA8WyHlmc7As92CJ7tEDy7QPFsh+DZibf/Ol6RPXh2geLZDsGzHYJnFwGNZzsEzy5QPNsheLZD 8ewEfL3l7PPn4/8/wZVSyQplbmRzdHJlYW0KZW5kb2JqCjM3IDAgb2JqCjw8L0ZpbHRlciAvRmxh dGVEZWNvZGUKL0xlbmd0aCAxMDM0OAo+PiBzdHJlYW0KeJytnd2KJMmRRu/7KeoFlMqM/4RlQVqN dK1lQA8gJIFAAknvDxvV0+3fKYs+lnWxDGiH2jOeHuYW7sc8PCMfb/fzn189zv/Zn9Pbn//x5V9f 3v+yPaavf/j3X7786e2f73+87etX9tu//OoxbfvX/+Dx9v7P//7h7Zd/+fffvvz6D/e3v/3nbOex Hc/t7Xhsb4/7ur039tevrb//ZXo8pvv3P/0YfGzPZfn21/uR//z+/mm//Mv5acvx9tjn53kB09fP f6e+/m17zpe/vXOPt/m+/PK3ffvBH9/B7Xle5Y/+4/K3r9zjbXlsaLD+8RfwvJQf/ceXv53c2Znp Q3sf/vbH858fjtK8vY/NL/+7zs/btL5N0239zCB+G7Lf/vzl178/gzG//fzXL4/vuXGOyNmHbX37 +R9f/ut+X9f/fvv57/l/b/PtuJ//CZll+cjsj9t2P/vzoZ1HYZ635T4fH9vZPzLHfpvO//BjO9NH 5rne7vftY3/W+0fmcZ/OTu8fL+w+F+hxP3t9PD+29JsKHbflbK4N0Rnn25nwy8dr+58Czcvt/pj2 jy2VADyWx+2oDR2Ved62xzr1F7fuZ7+3j0PyKMN2/v/Pfu8fx+RRL26fz35/jNLjt4U57rdjupck +ekr9NPPr1J6u3/P5/kckv+vhD5nnB8PKRI6zKoJPZg6oEjotOMJnXYOTegwkyd0oKcntPaaCR3o 957Q2m8m9IAev9OEHswlV5HQgWqKIaHTpXq3IqF1bJHQYWq8kdCBagCO47ZNj5Jt9eKe222Zpn5Q pvtym6al5EnJpekx3c6c/BjL6V6h59nxrQxK6dM07WfH9499upccmOb17PhzaoduWubbNN/LqJQc mNbH7T4/9jbi03rcjrncBWXkpm27bfNSRmUr0L7clnld+zAd09nv7WOi1Olxet7Pfh8fR+Veh+65 n/1+Lu1dcEboti33rY3l/Jhvy1ISs9508/S4nUGf2gyfp+ftXKOXHjon32NZ9zZO87KcHd/7fq/T 2e9jalNu3u5nv59rH6btuJ1r+dF/3L7ejnV6tHfdfMy3ba3pVIZufj5uy7ps7V03P99XpvXZhmm5 72fH90ebTstjOTt+LG2czqni7Pizv3+X+X5btvvHPj32Ch23aZvKsJR0Wpbtdq49ZVjKrLqs54pZ s6CO3bKdS+bWZ/iynUvmtn8clbn2ez/XzO0os2rt93EumtvzaO+V5Xkumvvj0fZpvZ+L5j59HJX5 XqFz0dznkillfNfHuWjuy7O969bpXDT3rZ9V1/lcNPe9ZEpt6VSmbT/2dnzX5Vw1+/tpXc818yg+ VHN3PaXpfhQhehT/WPdz0TyKEdU8Wfdz0TyKEl2u7TgXzaM40QV6novmUZyoftxpcWfHixM9tgqd i+bzXvpUZovtcS6azxcqs03novmsTnSpqM5F8zk/23t8O8Xp/ixSVIdlW85F81mk6NtEcHq36/F+ iptdwnc9bpihx2CqZg89JlMiMfQYTJ2lhx6znVo2DT1mQ1XXhh4Duvjq0OPm8qPHTb+jx+x49czv egymLi7RY/a7lnJDjwnVMA09JlT7/V2PO2boMaFLUfhdjwHVmTV6zAhU7xt6zEGpvjr0uBm56DFH rn7c0OMmvaPHzdBFjxmnMm1GjwnVPn3X42ZUoseEagSGHjfZFD0mVCMw9JgR+Mn0uBnf6DE/rtwq 0WOOSpnJo8dNpkSPm0yJHjf3+NDj5haPHhNSPe4ubuhxk3LR42YiiB436RQ9bjIletx8XPS4GZXo cTMq0eNmVYkeE6pyOPSYUNmzjB43i0/0uBmWocfdxQ09biaC6HEX8KHHHTT0uBnf6HEzg0WPGabq 0EOPmxSPHjcLefSY0PYZ65mO19ajDKzHmViPMrCewSwlELCeMHWXDtYTqIoBrGdAF3+C9Wi3aT2B 6qIP68nHXXawh/WE2SoT68nF1YZgPRpKWk/67dYzmIsbwno8SrCeXFxjPem3bgqypTJD0XoG5JuC vLq6vQjrCVQCTutJCKqJwXo0TrQejROsRxOc1pOGSlrSejxMsJ5EoIoYrCdQVTpYj0K0nkC6KUio 
XB2txz8O1qNDR+vRoYP16KjQejIqtUuwnrT0mwrFenToaD2JQO0TrMfDBOsJVE0M1hOoygOsR+86 Wo9+HK1Hh47Wo0NH69Gpl9ajSwatJ2NXoViPzuG0Hs0UWo8mAa1Hk4DWE6jkHK1HW6L1eEuwHs0U Wo9OPLSe2qcX1nPe9C+tRxlYjzOxHmVgPcrAegbTWU+gxnoG1FmPdonWE6ixnnycW0+YxnpycY31 BGqsJ/126xlMZz0eJVhPLq6xnvS7sZ601FjPgDrrydU11hOosZ6EoLEejROtR+ME69EEp/WkocZ6 PEywnkSgsZ5AjfUoROsJ1FhPoMZ6AlV7gPXo0NF6dOhgPToqtJ6MSmM9aamxHh06Wk8i0FiPjwqs J1BjPYEa6wlUbnJaj34crUeHjtajQ0fr0amX1qNLBq0nY+fWo3M4rUczhdajSUDr0SSg9ei9QuvR lmg93hKsRzOF1qMTD62n9umF9Wz319ajDKzHmViPMrCewTR7PWEa6wnUWM+AOuvRbtN60lKFYD3p k1tPGqonsmA9GiVaj38arCcfVxIP1uNRgvXk0y6PwWI9PnKwHu03rSd9qiebYD06crSeQFUMYD35 uMZ6vE+wHr86WI+OL6wn/W6ecA3om7C+mA12fWSf2UAZzAbOZDZQBrOBMpgNBrM2s4E2xNmggTIb 7HbMgLNBoHpXYTbY7VQDZoOGyWyQCNT7E7PBgJZalmE2GNDl8Xpmg8HMP/ls4KHEbOAQZgOFOBsk TPWmwmygI8fZIGGqFQBmg/RJj4N2Hcds4BBmA7+6zAaacZwNNJtYAzmEGkhTjjWQphxrIE051kCa c6yBNJasgRooNZBDqIESp1qVpAbSlGMNpCnHGkhTjjWQ9xs1kPcJNZAmAWsgzTnWQNoSayCHUANp zrEG0pxjDaQ5xxpIc441kEacNZBDqYEaJjWQphxroEDN827NOdZAmnOsgbTjrIHSp1q5oAbSJGAN pDnHGshbGsdBO+jbcdAm43IctMm4HAdtMi7HQZuMy3HQLt7jOGgHjeOgneyN46BNyuU4aAeN46BN yuU46DXlXvjq8xPnM5WBrzoTX1UGvqoMfHUwna9qQ/TVAdVNIfpqoKZ6DdT4qrcUX22Y+OrTDgnR VwfU+erTThLBVwfT+arHG77qEHxVIfpqwlSPC8JXA9UCD76aMDW+mj41vuodh68+7ZwUfVWTAL6q GUdf9Ybgqw7BVzXl6KuacvRVTTn6quYcfVUDTl9toPiqQ/DVxKk+Ioivaizpq5py9FVNOfqq9xu+ mj7Vb3nBV73j8FXNOfqqtkRfdQi+qjlHX9Wco69qztFXNefoqxpx+qpD8dWGia9qytFXA9VDnPBV zTn6quYcfVU7Tl9Nn+ohTviqJgF9VXOOvuotwVcdGr6qGUdf1Yyjr2rG0Vc14+irHm/4qkPwVZc9 +KqmHH1VpxT6qqYcfbWmXO+rj/vrk7XOxFcbZviqM/FVZ+KrYRpf9Ybgq4Gq9cBXAbmvAnJfbVoa vtoxw1cRAffVQI2vBnJfDdP4ahPv+GoDxVcdgq8iTBcVHb4KyPdXESb3VfTJfbXpeHwVfapKF1/1 JIivesbBV5uG4qsNFF/1lIOvesrBVz3l4Kuec/BVDzh8tYOGrzZQfNVzLr7qMwp81VMOvuopB19t +h1fbfodX/UkgK96zsFXvSX4agPFVz3n4Kuec/BVzzn4quccfNUjDl9toOGrPjnBV5swxVcB1bMx 8VXPOfiq5xx81S8Ovuo5B18FVL8qFF/1nIOvepzgqw303Vc94+CrnnHwVc84+KpnHHy1iXd8tYHi q55y8NUOGr4KqMQJvuopB1+9pNwLX51en4l2Br7qTHxVGfiqMvDVwXS+qg3RVyc7mUdfdQi+Otnx LvpqIPfVhomvTnYwj746oM5XJztPB18dTOerHm/46mTH6eirgerRGPiqhom+GujyDqn4asLU+Gqu rvFVDQF9VbOJvtpAw1c14+irgerBAviqxxK+qilHX9WUo69qytFXNefoqxpw+qrmHH01UD0RDF8N pPurDQNf1ZSjr2rK0Vc9AvBVzSb6qkPwVc05+qq2RF/VnKOvas7RVzXn6Kuac/RVzTn6qs5O9FWH 4qseAfhqIP/+u09z9FXNOfqq5hx9VXOOvqpJQF91CL6qOUdf9ZbgqzoZxlc14+irmnH0Vc04+qpm HH1Vk4m+qjlAX9UA0Fd1KqSvavLSVzXl6Ks15V746vL6NLsz8FVn4qvKwFeVga8OpvNVbYi+OqDO Vx2CrwZqfNVbiq82THx10fPl8NUBdb46oMZXB9P5avqtrzMlVPez4auBmv3VQNWM4KsNFF9NmO6f uWHW1we+ncEN40xuGGVwwwymvm4KN8xgLq+hxg2jH8YbJlDNKdwwA6pvbeINE6h+awM3jPcpN0zD 5IYZUH39E2+YhMm/9IqP0y+94tP8S69Nv3HDpEv+pVekgH/pFS35l14xKP6lV1xdrSdR4AWq3+ZE gecQCrxAzQOJQJdTNqPA84ZQ4K3lON0PCzwdOhZ4OnQs8DJ0l5cEp8BLS7UGQoGnQ8cCLyG41G4p 8BooBZ4GnAVeoHoQJQWe5gALvEC1LEOB10Ap8HR8WeDp+LLAy6g0DyQyvjUCKPDSUi1KUODp+LLA 06FjgadxYoHXQCnwAvkLzpqWUOA5lAIvo+JfevXxZYGn48sCT29NFngZusu7y1LgpaXLN2NT4On4 ssDTm4UFns7PLPA04Czw9P5NgRekXhsKPP8wFHiaTCzwAtW6FAWeTvQs8DRRWOC57KHAU21ggReo eSCRRKkQCjydUvL+3yYv8/7fJuXy/t9mStnWc9V8Fim6vJN4O1fN57NNy/M2uc33IkWXc/GnOT3u RYouZffpxc/3t9h0ObedYrzfixTVnNvv8229Fymq6XTOAWfHy+T0+MwLAx/762P4zqCKcCZVhDKo IgZTv2aMKiJMfdiAKmK3A3WsIrRHrCIC1VIDVUQ+Tn9FhNDltTijisjF6c/iEPIvkaPftUBAFeER SBUxmK6K8IZQRex2eo9VxG6nJVlF7HbKkVWEt4QqIlCtR1BFaMdZRex2gJFVROLkXyIHVOU/VYQG nFVEulS1HlWEhwlVhEOoInxUUEVomFhFBKpajypCh45VhHacVUSg+pAEVYQOHaoIHTlWETp0rCLS pfo2aVQRmpesIjwCqCJ06FhFaDqxighUn5GgitChYxXhLaGKcAhVhA4dqwgdO1YROnasInRYUEXo qLCKcAhVhA4dqwhvCVWEjgqrCA04q4hA/uocHxVWEToqrCJ0VFhFeJ9GFaEDxypCQ8kqQgeFVYS3 hCrCBwVVhLeEKkIDwCqiDsoLgTw+cS5eGQikMxFIZSCQg2kEMkwjkIedcKNAao8okIEagczHNQIZ yAUyF9cIZKBGINPvRiA9AhHIwXQC6Q1BIA87c0eBPOz4IgXSIQhkA0UgDzvASIH0liCQgaqKQiC9 
JQhkIH0LEQL+mRv9jN3LG92Z3OgNM250Z3Kjh/EbHYzf6IGaG917hBsdkN/o+Lj6JCU3OiC90XFx fqMD8hsd/fYbvYnAuNHDNDd601BudLRUn8rlRg90ufNyo6Oly3dbxo0O6PJQatzoTUu50QH58yaE oBaBudEBXYrAcaN7MHOjo0v1mVQqxSaWqRTRkleKTSxTKfr4olL0j0Ol6LFEpeixRKXosUSl6EmA SrHp+KgUPeCoFJuGUil6wFEpNi2lUmwuLpViE/BUik3AUyl6wFEpok+1lkql6FeHStEjjkqxaSmV okcclaIHE5UiID1Q6AFHpegBR6XYBDyVYtPvVIoeJlSKHTQqRR8VVIpNS6kUfVRQKTYtpVL0EIxK 8TIoL+xpen28zRnYkzOxJ2VgT4Np7ClMY0+TngCDPWmPaE+BGnvKxzX2FMjtKRfX2FOgxp7S78ae PAKxp8F09uQNwZ7SUmNPkx2Uoz1NegYO9jTZ2T3ak7cEewpU3QH25C3BngJVWYM9JZj6ekYwVcNg TzoqtCcPOOzJLw721ECxJx0V2pO2RHvSUaE9eUuwJx0V2lMdlRdT6/z6IKQzmFoHU292TK2DqWdt MLWGKcOMqVX7w6nVIUytDZSp1SFMrQ5hanUoU2vDZGp1CFOrQ5haHcrUOphuavWGMLXq6HJqDeTf dANUSyBMrbMdjeDUOtvRCE6ts50e4NQ62zkETq2zHaDg1BpID0Ii4HVCxNQayA9C+tBxavVRwdQa qJ5hxdSqLXFq1aHj1Kph4tQaqD4KxNSqQ8epVZMAhal3CYWpDh0LUx06FqY6dCxMdW5mYZpRqR1H YRqoRgCFqQ4dC9PEqf6sBQrTQPU7cyhMNeIsTBsohalGnIWpRpyFaeKkv/SKUfE3MwAqUy8L00A1 lihMfVRQmOpcyMJUw8TCVMPEwlTDxMI0V3f5ElsK07R0qV5TmGqcUphqlPAIE1GaP6NY6+tTYs5A sQYzlUBAsQYzl29IQLH0s6BYg6lnGKlYgepqDsUa0OUHQqBYgerHQbHycf4Da2jJq1cNABXLISiW Q1Ash6JYg+kUyxuCYqUlf8iHVGr2/ld7Kk7FWu35OhVL+0TFWsuj8x8qViD/gTWPExUrfaoCGcXy MEGxPExQLA8TFMvDBMXyMEGxNExULA0TFSt9ahRL40TF0jhRsTROUCwNExVLw0TF8jBBsTxMUKz0 qYoRFMvDBMXyMEGxNExULI0TFUvjRMXSOFGxNE5ULO8TFCtQ9RkolgYTiuWfBsXSgFOxHIJi+ahA sbQlKpaOChVLR4WKpaNCxUqf/LsmPmFSsXRUolgeACiWhhKnxJqWckqsCWVOiTWhzCmxJpQ5JYYA lI7jlJhHCd818bsA3zXx6RnfNWk+Lt818WDiuyY+LPiuSdNSvmtyGZYXDr29PijnDBzamTi0MnBo ZeDQm543g0NvenINDj2g+u1wOnRaqg934NBpqT5ugUNvengvDq0BoEM7BId2CA7tUBx6+8RBuaYh OPRmp8To0JueXINDa0t06E1PrsGhvSU49FZOpf3QoQM1Dq1xokOnT75N6WGCQ6ch/6ZFEwE4tENw aA84HFpbokNrwOnQGnA6tAacDp0+Vc2EQ2vE4dB+cXBoDRMd2luCQ3uY4NAeJji0hwkOrelEhw5U LRMOrbGkQ2sI6NAaTDq0twSH1mDSoTWYdGgNJhw6XbqcsYlDB/IXcjWxhEN7BODQGks6tLZEh9ZY 0qE1lnRojSUdWnOODq2zahxaQ0mH9gDAoT2UcGhvCQ7toYRDeyjh0C57cOiE4CLacWiNEx1aB4UO rYNCh/aPg0NrMOnQOix0aG9pfF/7MiovPPv4xDl1ZeDZzsSzlYFnKwPPHszlnBE8e0BrfXUQPDtQ VWh4dqDmOED6VF9uBs8OpD+LjU+7uHg82xuCZ2so6dkOxbMH03m2NwTPPuw4IT37sCOO9GxtiZ59 2BFHevZhpxfp2Uc5v/hDzw7UeLbGiZ6dPrlne5jg2X5x8GyPJTzbIXi2BxyerS3RszXg9GwNOD1b A07PTp8az9aIw7P94uDZGiZ6trcEz/YwwbM9TPBsDxM8W9OJnh2o8WyNJT1bQ0DP1mDSs70leLYG k56twaRnazDh2elS49mBGs/2WMKzPQLwbI0lPVtbomdrLOnZGkt6tsaSnq05R8/WWTWeraGkZ3sA 4NkeSni2twTP9lDCsz2U8GyXPXh2QtB4tsaJnq2DQs/WQaFn+8fBszWY9GwdFnq2txTPrqPSe/Z8 f/2NBmfi2Q0zPNuZeLYz8ewwl73jeHagy6Z3PLuDhmcDqlIbz0af/IvfgHQ/2wMAz26geHYDxbMb aHh2mMazm4bi2QhlDUA8G1CpkODZ3hI8G/H2H0QDVO04nh3oIrXxbPSpnuSIZzctxbMR8YuMf/ds MDVM8Wx0qf7OcTy76VI8Gx/nZ0IA+fdB/eLg2Q0Uz26geHYHDc/2YMKz/faNZ3ss4dlNl+LZuKH8 TIjfdfBsn+Xg2bi4y4na4dl+a8Kz/TaAZ/v9C8/2YMKzm4+LZzctxbN9WODZninwbL/t4tlNl+LZ TZfi2U2X4tmevPBs7xM8+9KnF4Yxj5Og9WWOMYzB1CUWhhGmvlwxhjHbyQgYRtopdwMMI4yfOg10 +WIlDMNbgmHo5dMwtN80DA0ADCP9rq9vh2Hk0+qTfhhGoLqXCcPwi4thhLnIQwzDrw2GMduZDxqG tkTDCOQ/CeA5QMMI1BiGQzAM7zgMIy0VXaNh6NDBMNJQXc1hGJqWNIz0u34dB4aRofvdZ+aV5fVJ HGcwrwxm/o3OK2FWnVcGU0/FY15JO828kob8JE6gi5NjXtHL57wyoOYkTqDLnZ55ZbGHTpxX0u86 aWJeWewZF+eVQPrLCQ2DeSVQPc6PeSUj57+cgAjUGwbzymJPHjmv6MhxXsnI+RMCQLUCwLziEOaV 5MDloHrmlbSkp9k9UTivZFTqG3gwrwSqn4Z5RZOAlctijzBZuWjysnIJVH8SGZVLQlB/Pw+Vi6YT K5d8XFO5ZOhqBZDKRfOSlYvmJSuXfNrl9/NSuei0w8olLdUHEqhcNOVYuWjOsXLRnGPlojnHykVz jpWL5hwrF805Vi6ac6xcNOdYuWjOoXLRlGPloinHykVzjpWL5hwrF805Vi46YeIJAVqqr/vPEwLP OTwh8JzDEwLPOTwh8JwbTwg84/CEwDMOTwg84/CEwDMOTwg84/CEwDMOTwg85fCEwFMOTwg85fCE oHHLPCHwlMMTAp/m8ITApzk8IQBUH0jkCYGnXJ4QeDLhlxNcwfDLCR00fjnB0wm/nOB3AX45wQOO X07wJNhPc3rcqxTV/YZTjJ/3KkW1pVOM98d9+VE6adGyLz+sSF4UOZsdBEKRM5g6q6HISTv1RydS 5IQpo4kiZ7MHXihyNns2zCJns8drLHIaKEVOoFpRoMjZ7JEfixwNNoqcMJev9abIyYj4MShA4yc+ 
/nj+868v7+z2mM6hmd7+/Zcvf3r755dT5fb1NKPTDdevC8P5f6bz8tbvwPlfvRPv/+m3f/nVefn7 9Pbnf3z59R/ub3/7T5dmy7mGfhvWOnOONAtTNyiSZmD0m+FgDkuzMHVCTJqB8Vo6UPPyHVy9/2yn dwlphmvzNENLWkuH+dReyHI2/OP9CYzfYOoeG8ZvMP7ypDC+xxqm5hPGL/2pYcD4BfK9EEB1aDB+ gfyb/YB8jzXQ58bmnPdf7VM5g7EZzKT7VGGae2swNRAYm7Tj7wz0hjg2DZSxCVRXDIyNxohj41Du rYYZU3ig2d8Z6OHGPlUTgbFP1cQ7+1RNQ9mnaqDsU3m8sU/lYcI+VQeNfaoGyj5VA2WfCveAn2TF 1V22tsc+VQd936fCqJTyBftUaKiewM0+FSA/ydpEIPtUHTT2qZC8dUck+1SevNin8sTEPhWgy6sZ xj4VoLq9lH0qQHVTaOxTIZZ1Tyj7VD502KfyWGKfqoGyT9VA2adqoOxTddDYp3II+1SeBNin8iTA PpUnAfapPAmwT9V0PPtUDZR9qgYa+1QdM/apGij7VIhl3V/LPpXPqtin8gzHPpVP4tin8qHDPpVf HfapGij7VB009qka6Ps+Fabw+oqH7FN5KLFP1QQg+1Seu9inauKdfarm47JP1che9qk6aOxTNVD2 qRoo+1Q+EWCfyq8O+1SIk//Cp6+s2KcCVI/gjn2qS5deePY5Nb70bGXg2c7Es5WBZysDz1aGnj2g ywYHPDtQrTzh2YGaGsj7BM92KJ7dMPFsh+DZDsGzHYpnOwPPTpSq+sOzA/m7uQHVIyTwbO0TPbuB 4tkOwbMdgmc7BM92CJ7dQMOznYFnOwTP1gSnZ3tL8OwGimcrRM92CJ7tEDy7geLZDsGzHYpnOwPP TsD9eXDTEjzbIXi2Q/Bsh+DZDRTPVoie7RA8W+dwerbO4fRsbwme7X2CZzsEz3Yont0w8Wy9NenZ HgF4dqAq4/DsQJcXl8WzGyieHagaOzxbQ0DPdgie3UDxbIeGZzsCz3YInu0QPLuB4tkOwbM1wenZ Lnvw7AaKZzsEz3YInt0IaDzbIXh2A8WzHYJnOxTPrnP4C8/e19eerQw825l49mD83GWYWrnAswdz eUIAz05DjWenJX8zg3eJnq3XT892KJ7dMPFsh+DZDsGzHYpnD6bu09KzE6XL4cx4to8cPDst1Y1a eLb2m57dQPFsh+DZSd16LhqenaurRxPh2Zpy9OyEQH9r3gNOz9aA07M94PBsDxM8u4Hi2QrRsx2C ZzsEz87Q+VuEPQT07ED6jTEfFXp2RuXyM/LxbL84eLZD8GyH4NkOwbMbKJ6t0zw9W28DerYGnJ6t Eadna8Tp2Xp19GyH4NkOxbM9TPBsXXvo2brU0bM9lvBshejZGnB6tgacnq1homc7BM/WYNKz/eqG Z/u1wbP92uDZ6VHj2TrP07Pzcf5mBkB1Ix6e7bIHz26geLZD8GyH4NmRi3qgEJ6dONUTTfDsBopn 6w1Fz9ZgwrMr88Kzn3Z0CZ6tDDzbmXi2MvBsZeDZytCzHYJnN1A8e0CXL/vBs70leLZD8eyGiWc7 BM92CJ7tUDzbGXh2A8WzHYJna7zp2doSPbuB4tkOwbMdgmc7BM92CJ7dQMOznYFnOwTP9oDDs70l eHYDxbMVomc7BM92CJ7dQPFsh+DZDsWznYFna8Dp2d4SPNsheLZD8GyH4NkNFM9WiJ7tEDzbIXi2 Q/BsjTg921uCZzsEz3Yont0w8WyH4NkOwbMdgmcrRM9uoHi2Bpye7S3Bsx2CZzdQPNuh4dmOwLMd gmc7BM9uoHi2Q/Bsjzc822UPnt1A8WyH4NkOwbMbAY1nOwTPbqB4tkPwbIfi2Qn48gnPXu92nj2e 7Uw8u2GGZzsTzw5Td5/i2WHq5hM8O9D1WPzwbECX1xEPz8bH+bkRvzZ4dgMNz+6Y4dkNFM8ONNVD MfFsRKAe9xie3TDxbEB+Phuh9F/F85bg2R4BeHYHDc9uoHh2A8WzPeDwbA8mPBtxqnvVw7M9weHZ aMjfI+B3ATy7iUA8u4OGZ+MWd8/2TIFne6bAs9FSfbVXPBtDV18REM/2OMWzfVTg2T4q8GyPJTy7 geLZDRTPbqB4dhOmeLaHCZ4NyPezGyie7fcKPPuSKS9Wxen1t5acwaroTFZFZbAqKoNVURmuig5h VWygrIoDanafmpawKjqUVbFhsio6hFXRIayKDmVVdAarYgNlVXQIq6LGm6uitsRVsYGyKjqEVdEh rIoOYVV0CKtiA41V0Rmsig5hVfSAY1X0lrAqNlBWRYW4KjqEVdEhrIoNlFXRIayKDmVVdAarogac q6K3hFXRIayKDmFVdAirYgNlVVSIq6JDWBUdwqroEFbFGvEXq+L8/aiUf886jH/POox/zzpMsyoO xt85h/742Sd06PL1gayKaampFQf0qW9Hr+vr02Rh6lvzENHVHsMiooOpMoWIrva0DxFd7akwI5pO ++/8oEeXr7cnooGa6lvjSM9wKJ7RMPGMAV3emADPSL9r4sEzAtXEi2ck3v7taECXI2fxDM0SeoYO Lz1Dw0TPSA5cXiEaz/CW4BlpqX5jG56huUvPSMAbz9BgwjM0lvQM7xI8Y7WH5/QMDxM8o4HiGQrR MzTg9Ay9OnqG3r70jED14+AZGnF4hncJnqEBp2d4mOAZDsEzPJbwDO84PCMd99/58UmVnqEfR8/Q ONEzdHzpGbWlF6vi/vrshzNYFZ3JqjiY+sUMrIphykSGVXG35xxcFQNdfkA6q2Igfx8PIH/tk18/ V0WHsio2TFZFh7AqOoRV0aGsis5gVWygrIoOYVXU4eWqqC1xVWygrIoOYVV0CKuiQ1gVHcKq2EBj VXQGq6JDWBU94FgVvSWsig2UVVEhrooOYVV0CKtiA2VVdAirokNZFZ3BqqgB56roLWFVdAirokNY FR3CqthAWRUV4qroEFZFh7AqOoRVsUb8xap42N47VsXDnppgVQzj1fdhz0ywKobxWvGwhwpcFXNh lwUvq+Jhz1W4Kh72XIWr4mGPJ7gqDuhTdfx2H6/1uBzK/T42YaopZWwaZowNGP1WWBh/XoA+1xe/ Z2wCXUrGjI33GmMDqD5Ezth4vzE2DTSMpWOGsTRQjCXQ5dX3MZYmTMNYmgDEWDpoGAs+re6bxFi8 JRiLRwDGgkS5vLJ/GAv6VL8TFWNBn/z3egG5sTQdj7F4hsdYfORgLJ7gMJYm4DGWpt8xlg4axuIQ jKWBYiye4TAWjxOMxUMAY2mgYSye4TCWpqEYSxOBGAsSpZboMZYmAjEW9OnyE2HDWBDwyyv7h7F4 x2EsnuEwFu84jMVTHMbiEcdp1abjOa3aQDmt2kDjtGrHjNOqnuE4rdqEKadVmwjktKpDOK3qKY7T qk1LOa3qIcBpVU9xnFb1EOC0qqf4OK2KeNfX4+e0atPtnFb1BMdp1abbOa3qCY7Tqk28c1rVO47T 
qh00Tqs2UE6rNlBOq3qG47SqxwmnVT0EOK3aQDmt6hme06qXhl549uP1W86cgWc7E89WBp6tDDxb GXq2Q/DsBopnD6i+H4ae7S3Bsx2KZzdMPNsheLZD8GyH4tnOwLMTpctDtXh2oFpOwrMD+fMy7xM9 u4Hi2Q7Bsx2CZzsEz3YInt1Aw7OdgWc7BM/WoaNne0vw7AaKZytEz3YInu0QPLuB4tkOwbMdimc7 A8/WDKdne0vwbIfg2Q7Bsx2CZzdQPFshenZC4KdVPTHp2Tqn0LO9JXi2dxye7RA826F4dsPEsx2C ZzsEz3YInq1homdrwOnZ3hI8W/tEz3YInp2Pq5oJzw5UWxqerXcmPTtQVWh4tn8YPDtQrSHg2YGq jMOzfVDg2S578OwGimc7BM92CJ6tEadnq4LRszVO9GxvCZ6tYwfPrv1+4dnzOOGkJ/3CXM6cxbMH U2tFePZs5yPg2elP2YKEZ4dp9rO10/Ts2c400LNnOx1Bz57tgAg926F4dsPEsx2CZyfczX62hyme 7QGAZzdQPHu20yj0bG2Jnq0RoGdrctOzNQL0bE1venY+rhYI8GzvODxbMxye7f2GZ2uC07M94PBs 7zc8u4Hi2QrRsx2CZ2uG07M1TvRsDQE926F4tmY4Pdsbgmd7BODZmnL0bI8APDt98l+XxcddDq/F s7Xj9GzNcHq2dpyerSlOz9aI07O94/Bsh+DZDsWzGyaerRlOz/YwwbM9AvBshejZmuL0bG8Jnq0h oGcnU6qxw7PTp8azHRqe7d2GZ3u34dma4PRsHTl6tiY4Pds7Ds/WjtOzGyie7RA82yF4tmY4PVvj RM/WENCzHYJna4bDs2tDLzx7fX3S1Rl4tjPxbGXg2crAs5WhZzsEz26gePZq59Lo2d4SPNuheHbD xLMdgmc7BM92KJ7tDDy7geLZDsGzNd70bG2Jnt1A8WyH4NkOwbMdgmc7BM9uoOHZzsCzHYJne8Dh 2d4SPLuB4tkK0bMdgmc7BM9uoHi2Q/Bsh+LZzsCzNeD0bG8Jnu0QPNsheLZD8OwGimcrRM92CJ7t EDzbIXi2Rpye7S3Bsx2CZzsUz26YeLZD8GyH4NkOwbMVomc3UDxbA07P9pbg2Q7Bsxsonu3Q8GxH 4NkOwbMdgmc3UDzbIXh24u2/jtfIHjy7geLZDsGzHYJnNwIaz3YInt1A8WyH4NkOxbMT8PGWs/ef j/8/ACATxAplbmRzdHJlYW0KZW5kb2JqCjQxIDAgb2JqCjw8L0ZpbHRlciAvRmxhdGVEZWNvZGUK L0xlbmd0aCA1MjAKPj4gc3RyZWFtCnicjZXbahwxDIbv5yn8AtHqaFtQehHa5rplob0vTSCQlqTv D5X3QLA28ZaBYZC/kX//km0qGM8Nxas5l59P2/M2IpX4EHj5tX0vv0cQmh3Y08cNiVYdP1AZz7e7 cvx4edh2d1ge/kYeam5cOtVCaHUkuz9kHxEmYjyH3gapI+Ipiv31dxyzHT9ittv9tvsyUpT9/Ubn 9Wgfv4/g0/YBkfrHsn98HbYGXtFnhmamKjAKTQxKYhyEtU2MJKYFg8wTo0lPd8DGMjFmM+MVREzn PD4zhAyVfBZtnCBiYPG2nI3IwZBniwwTJAqRbdatmiAVoNptzpTMJpNwW2fhWhNUCTz8XVpJjaC2 BOW6UavQXH3tU6+gZLZ23MNxd15P5w0649yTliDGBtEDdWkmUwhvnjK1BLGA8dJKlugBSrL1NkEa kFdd+s3GgDVDWdGoXFpaNolrg1jKlfVH4YikL5uJo3CkfkVS7CckX0oSpEiEsyRLBggF1LUuJQkT MCMvO0AkMl3ZTaII1FLhcgPE2QxO2YDUAmIWZ0VvyxaQGpBbX7aANAOtGcqaul2eXxeGO4JaOk+T lRpF6aizA5y2pUZRLjaTHB34vN++xvPmTScNrIQxVtQwuq0D2/+MPW8jdpj8NHCgRWTcl3FNni7F 3Q8sn/5cmV+iyd8V8N7gOwqqaVZAZwX/AG+olBIKZW5kc3RyZWFtCmVuZG9iago0NSAwIG9iago8 PC9GaWx0ZXIgL0ZsYXRlRGVjb2RlCi9MZW5ndGggMTY4Cj4+IHN0cmVhbQp4nI2QMQ7CMAxF95zC F2hqmzpJd6TOsMCOoBOglvtL2Em6oBaQJct533G+Q4AaDWmKPcPl7iZnJBBnMF/dCR4GfZTcW4uG OpZgFwgsjgOUYh5dOyCML51DPWKCRAEItVmH3fJ0I0zEuKD1RqV6KFTlTA8aqxZ30Qvk1AkCx+RZ /tEmZyz/QhVKN4ktqvvVbdozwv75633mLwY2xA0HAfHTAS0O3sfwWO4KZW5kc3RyZWFtCmVuZG9i ago0OCAwIG9iago8PC9GaWx0ZXIgL0ZsYXRlRGVjb2RlCi9MZW5ndGggMjgyMAo+PiBzdHJlYW0K eJylW+1qZTcS/O+nuC8QRd/SgbCQzEzyexdDHmDZBAIJbPL+sLr2WFWnj6uv2TAQjFNutVqlUnVf O93i+vdNWv8ZR779+/en/z7dv9NTfvnGn/95+vn2x/2bYbQX7Ncvvkkt1nn/gXS7//vXT7fXL/78 9enbn+Lt179WnBxjr7eZ+i3F1u/BfnmJfv9OTinHt2+9D8xxYb5+N078eLyv9vrFWu2H56dvf4y3 NG7Pvzylt/2s3NaP5+P2/PvTdzGm4x+359/wv9sIR4/HGZPOmF5DjiWdMLEYzBFKruOEyf2MGUcY ee2KMcXEmQtj1yrzjEkxhZxrdxNKsYdSez6BajWg1EKNuZxAzWw/5QUq5zLWYTClhRLHg5RqDDUf 55RKNKBWQjxs3rYCPYfZDajZSGOEOky9WzagOa7LtWZARw2lHfUM+nQGLYqsMmWfJjnO0Fo586SZ 3eW0QMcwIEOmvE7lSPNwzzeXGo7RzjnV7w2oltDiLO9V/Mvz67W6n/A716qPt1M0ZW0tzJjKCWN5 1UvoMbcTxtZrpMXPYta6XqscX/e4MZYw91OO3eRjTnkd8vxKYWAOe6vySvq16nKxlBbPF9JdLa1D XtJX3e2n3MM6lOHuP5USpglUP9mrl0JP7ZzS5RLXY+Xdm7+5pZg5jfORpM8fYctsYpvEFokhtmzM RWDBlo2xdSe2bIzDFmCqZAvWsgUltsjFmC1y98wWmRKzRUcCWxDIspzYAtCFUmALQD9qtugK9Lby Pvy0x1L8JbHnehslSzOGnlP3855zvUL58E/u6Pcn1tDkByvmNSxKVjfxdWor8X4+lIuYp2MlPg43 8ZzHfj4R6bNV/BZyic0t+F3xl8nwiZJbDLOcC57s+9Jm6KUakA3U+3oYm7m69hEay2SV7utEnnnl Pc+nkpsFHSvv43wql8f6GKHX6F/fVcZQa/Kvb0klrHrm987X18M7mx/p4QXzFjLbkNBH/IwVeOij g9n6qPODPlIcU2HoI2H0awpQtjYO+ghQsjYO+qi3RvpIICtGWx+BubgA6KOTEvSRIpnrSvropLT1 
kQJZOwF9JNDF7G591IdL+uhEgj7q4yV91MdL+qhrSfpIOV0c8dZHyslqNvSRQFbWoI+64tBHfXKk j87moI+0mjHLpI9OwaGPTpmgjwQyV4X0Ue+O9FEvR/qol1ttU1inUN06lTzDrG24JSilr8QNxkhT qXXlPQ1RjHMpLa+8j+ZXoMew+k5fzUsfYbZs7ophUxkt9GbodKnlXC1mq929UOVIIbd2PhX7cJfj WImPc0724V7nvxKf5lRM4jXVlfhh7q8536Unob6OWagEXwyoxJB7zm4xa5lhPT3mWEydal0PqGWB lbDa1ovZz7W0nXjt68Xso7hXs/b1ZPbZXYbXsd7Mfkz36Opcj+ZIyT26eqxHc+TqHl2L69EcpT8A rUdz1MMHpfVojp7fAz3wN2XPeOR0ABg9HQBGTweAeWfo9uZnNsbedfIzwGg/U+RYivyM3Bj7GYCs VSE/I7fPfga1ltMBvTf2M0WO+MjP6AqQn9mgD00HclNDEGKLxBBbNqbYzh9s2Rh74YktTc2tiC0y H2JLk+MWYosGEVvkaswWRLKmldiCvVlfB7Y0ObkituiUiC06ErGlyYET3C8wdvJB7leXktyvzpvc ry4luV9ZSna/sgLsfmVO7H5lCdj9AmSXI/erEyf3Ky8KuV9grIkk96s3R+53g6x/YPerI5H7RU62 3SD3izLp6YCWAXa/skzsfmWd2P3Ka8DuF5GsQYT7RS3TR9R3PJ7NXjAfmUUMNasjNR5qwEZqPNRU jNR4qCkkqfGQQ0hSY1kMVuMhZ4ekxgBZ6SM1BkjPInTepMYblL5YENRYb47UGMvZzUGNkbZ9REiN 9Wqkxno1UmMNIjWWy7EaSy6xGgNkm2xS4w26TDVIjXEqVmhJjVFM2/iTGuucoMajCW1gNZbcZTWW lGM11suRGuvlSI11mUiNZQVYjeX5shpLsWA1liVgNZacIzWWd4VnEXpzNIuQlONZhKwlzyLkheJZ hK4AzSKwO5s4zSJ0TjSLkNeAZxFSL3kWIc+XZxEaRLMIeb48i0AkOx+hWYS8LDSLkOfLswjU0g4H aBYBkE2JZhEAmbeXZxHYnB3Z0CxCPj48i9A2g2YRkpg8i5A3quX1aI5RXTq1sl7NMcd7FPedUlkl eX/SAKcEjP31HjgjwtgedDsjYC6Tj+2MgLEdAZwRMLYOcEYUR/epOiFyRrSzy0c52xnRctbQwBlR 3jbSdkaEuXwAs50RgS7N7HZGBLLzETgjpwLbGVEguzc4IwLZiQ2ckT5cckYEsp+HwxnpepMz0psj Z6QLTs6IlrNWBc7IWQ7OSHOXnJGz3HZGuuDkjPQtIGfk5A1n5JQJzsjZHJyRsxyckd4dOSNNAnJG +q6QMyKQffLhjJzltjNyVoMzogqYV4qcEUUy14CcEaVkvSGcEUXSzogi2ZzgjPTRkTOiSPbo4Iw0 ncgZab0kZ+SA4Iy08JAz0uwlZ6RvFDmjS53kaAFOSZ8SOSWnbHBKBHr0uQacE37I/rowOSfa16eP eIv6eAauMeQtNqbIqQsw9iaQt6hqZEXeAnHMPSBvUdWskb1FVUM09hYyI/YWskTsLXQkeAu5N/YW VY3s2FvolMhbICXtLaoaALO30Hsjb4FIFwMCb2EroC/myw764zGixhCBN8ZOgonAwBySwMDIsSFh piSwzJkJvEHXX7MHgbtqbpjAG3TxvURgRLJ2FQTuqgFiAuvNEYE1iAisQSCwxhCBNYgI7IBAYA0i cyxBbI4lJ9kc60hkjjWIzDGWs3aVzLHOicyxXg7m2MHAHGsQmWMNInOMvPWHOE4kMseIZD8QI3Ms QWyOAbKGjsyxFBQ2xwBdHDTM8QZlMy8hcywrwOZYg8gc6wqQOZaKyuZYb47Msa4lmWNdATLHendk jhHJDtbIHG+QHcqwOQbIzhbJHMs6sTmWibM51iAyxxoEM6wxZIZ13mSGdSQyvxpE5leDMDbUCkZj QwJ9tqA9NtSCSWNDLTw0NrwkvkzPK+7ljxvvX7z9ueHFBtFgURuLVta7Ot3L1Op6VacxTLbTaMtV xWkdk410/1u5aS2TzWisZ3UazxT3PDS+7T4+2P1YD+/s/mPR5np4p/FVlwHtMl9xWl9laNvjeniP eM7azj/WnQ39MHboq/ny27EaVfcON6sxcLPA6HYMGN2OAaPbMYoj2zGdM7lZvRi5Wb0auVkC6VGv k9N2s5TSxahuN0urWRcON+tsDm6WItkGabvZSwEeMCqp3+UjRlnMB37NAj9jn3diGDDywwTCyD+J I8yQDAPG8oIYJovBDEvqb1uZYUn+0iIxbIP0hwkUSP+KJAW6NFVgmN4cMUyDwDCNQb/kgNAveaDd Lzkg9EsaRP2S5gn1S04k9EsOCP0SLad/6Y1A9nMC9EvOcrtf8jC7X3JA6JccEPolfXepX3IioV/S SkH9kgZRv6TvOPVLBNL9kj466pcA0v2SrgD1Sw4I/ZJTAfRLWi2pX3IqgH5J85L6JacC6Jec3aFf IgWzvxqAfolAul8iwbSdF/olimRB6JcIdGmqdr/kLId+SZcA/ZI+X+qXPNDul5zV0C85IPRLDgj9 EuVkuxz0SwTS/ZJmL/VL+mpSv/SeY/k/+iX9YOx+SZ8I9UsE0v2SvpbULzkg9Eu0+7/TLznWEP2S th7UL9El0f2SvpPUL1181draP9e//wHWFyL7CmVuZHN0cmVhbQplbmRvYmoKNCAwIG9iago8PC9U eXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMAovQmFzZUZvbnQgL0NvdXJpZXJOZXdQU01UCi9FbmNv ZGluZyAvSWRlbnRpdHktSAovRGVzY2VuZGFudEZvbnRzIFs0OSAwIFJdCi9Ub1VuaWNvZGUgNTAg MCBSCj4+CmVuZG9iagozOSAwIG9iago8PC9UeXBlIC9YT2JqZWN0Ci9TdWJ0eXBlIC9JbWFnZQov V2lkdGggNjAwCi9IZWlnaHQgMzcxCi9Db2xvclNwYWNlIC9EZXZpY2VSR0IKL0JpdHNQZXJDb21w b25lbnQgOAovRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDY0MDMKPj4gc3RyZWFtCnic7d2/ ayTnHQdg/xVpA2lSuDIprrjGYBU2NhgCigwC+SoTg4pcECQgQkwaNSZuTPBBMOEaJZC4uUIQbMgF GQyxCyMCgUOEQDDimkCISCL5tBluyDA3+0Or3Z2dz+w+Dy72xtLuZ1+97/vd+fHuDAYAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AADQgcvLy1u3bj333HM//vGP69vffffdamP1Mw3f+MY3/vKXv1S/8sUXX1T/65vf/Oa//vWvCS83 7OOPP17sWyvzDCcZl6HRAtOrP+HMTwJAJ+pzeL0S3bQOfvTRR43/O7IUTqiDCy+FN62DM1exsq1K 
OqmYIH3XC9wntvTLVJTaBJKJEMB4wdMDmd6BDC1JXwpkDiHKTNtaIsEovdF0M6n29T20MB3p9gHS BZQIKy/TZvH1mU3d8U2ZnrZNqfim/jXtcnP0pAYryZO/863BVDfEpv72K+QurhAJxUaL6Oi8iY7C PVRPBp0Im2FE2QD3v6fjUfb5tXRcJmM41eLYB9gn2efYo/A9zD7DPpaXVhsdnbL3v6VppybklZqQ Gk1P5VdVqxaorlRdAeEMwG6jM8wOZUxcj5/A/8YiOnOYDfh9ilWkjZz/y2LMMrtVDGKZThoOqjFW sd/TsEjFtrRI9No68LyGxWrV0POSllyrfqjTqpBatWrVqiU6FmsgwqRTIQ2JWEWBNfC5PXTVvZE9 5BN/9aOLR+nt7SFpj/zZvfz1j/9pzfLXzxlVWK+5m35MGqTXXPfIBvmf3i3/eJv8v4z5nloqHaIX 1JL3kVpGln+85vVt9PK8WYuM2mVIvQQ5EIarZKURe/I+QzSqdPlf17y7becovQxxWmzShbmvzqnW HVMunA6ZdC17Jv7JkWMT/2xabJ70csimQ+Ypb7fIsZPf335PKHLV7Xxo7jgw1DL1g0p3bnt3zV+X j7aM5YAhXo/NhtDcq+7hI3Nvt+UBEg8zGcOenasebZkSLKHPWrZ9vHxsSnBO0GOr4RJFAkASDMhq aNnwyHV7LhXITy+ZKgTbv9wcsrm3z10btE0GuUvUNvtZUnr04p6xj/aO/ejxscnguf8faToMmDPd K+kj6NKB04Q4U8uqPZ/9L2OtGfvs/+3QU9Bn2+/hQ5G5l7sRP6MmeWxQ+mjLtlFIc+fysXdbxv56 6ZvThOfhwwPnrydHDyJlnYfF8gN2jMAo98H9eEClwhDDDuXjqxh5QUs1SuCJ+Go1xb8/H1/NqOQl oVECT8TXaCj+E/n4GgVfM0rgifhaLcU/Dre+cXz1ZfF1Oop/Jh9fz8r4+hECT8TX6wk+zd89jq+5 LL7BQPHd+ekbWS19bhwh8ER8o5HiJ/LTN6oU/IsEnohvMlH8q/PxTSodfW66SOCJ+GYzxe/Ixzcr +OaLBJ6Ib7FQ/Jvyy29S6+X0LxB4Ir7VSvEn9K9ZbZDTv0DgifgcR/En9K9VwbdeIPBEfJuN4k/o X6vaeFl8nqf4E/rXopHxLZ8SeCK+3U7waSK58nAaE33OfUrgifiCQPEn9K9NY6bPbZ8SeCK+w4Ex uSKyuslSjiMtrkEmxE+Kd9J4C3JOii+nMIc8k+LnUdiGvLl4hlZkDXzVOjXSqEcvoknPeggMfahX j36KJqV3KyKrqx5UPOmd75OutJBxcuSvk985SaqNImj6xHhMGtyOYuiKSfFXASygUiRNzANvJ/E2 PTLpL/55Uh74GMBeNA0tnRjPOGkfzEQrJqbFkDr6BCOyGi98Mqn+zAvKM8uEZzQ9lqRHLFfdE99h /xngqMuMbOZPz6P/C9O+6XcKZW5kc3RyZWFtCmVuZG9iago1MCAwIG9iago8PC9GaWx0ZXIgL0Zs YXRlRGVjb2RlCi9MZW5ndGggMzQzCj4+IHN0cmVhbQp4nF2Sy26DMBBF93yFl+kiAgw0qYSQEpJI LPpQ034AsYcUqRjLkAV/XzM3D6mWjHXm4bnDOCyrXWXaUYQfrldHGkXTGu1o6C9OkTjRuTVBLIVu 1Xgl/qqutkHok4/TMFJXmaYP8lyI8NN7h9FNYrHR/YmegvDdaXKtOYvFd3n0fLxY+0sdmVFEQVEI TY2/6bW2b3VHIuS0ZaW9vx2npc95RHxNloRkjqFG9ZoGWytytTlTkEd+FSI/+FUEZPQ//xpZp0b9 1I6jEx8dRZuoYFozyQx0AJVMcYTIHVOyZ8q2TCl8GXypBB2YMtyyKlnPtfLLTcdD9pbDohI116gZ M8U7GPdslCgm4UtQU6Y3rfORxjBCZIKEFFoTqEvQeAqRSQYj+s8kjBsYoSxbobn01up8PMfXrtDH /MPnh3Gfpro45wfJr4cnOM+uNXR/YLa3c9a8/wAh47ETCmVuZHN0cmVhbQplbmRvYmoKNTEgMCBv YmoKPDwvVHlwZSAvRm9udAovRm9udERlc2NyaXB0b3IgNTUgMCBSCi9CYXNlRm9udCAvQ291cmll ck5ld1BTLUJvbGRNVAovU3VidHlwZSAvQ0lERm9udFR5cGUyCi9DSURUb0dJRE1hcCAvSWRlbnRp dHkKL0NJRFN5c3RlbUluZm8gPDwvUmVnaXN0cnkgKEFkb2JlKQovT3JkZXJpbmcgKElkZW50aXR5 KQovU3VwcGxlbWVudCAwCj4+Ci9XIFswIFs2MDAuMDk3N11dCj4+CmVuZG9iago1NSAwIG9iago8 PC9UeXBlIC9Gb250RGVzY3JpcHRvcgovRm9udEZpbGUyIDU2IDAgUgovRm9udE5hbWUgL0NvdXJp ZXJOZXdQUy1Cb2xkTVQKL0ZsYWdzIDUKL0FzY2VudCA4MzIuNTE5NQovRGVzY2VudCAtMzAwLjI5 MwovU3RlbVYgMTYwLjE1NjMKL0NhcEhlaWdodCAwCi9JdGFsaWNBbmdsZSAwCi9Gb250QkJveCBb LTQ2LjM4NjcgLTcxMC40NDkyIDcwMS42NjAyIDEyMjEuMTkxNF0KPj4KZW5kb2JqCjU2IDAgb2Jq Cjw8L0xlbmd0aDEgNDc5NzYKL0ZpbHRlciAvRmxhdGVEZWNvZGUKL0xlbmd0aCAyNTI4NQo+PiBz dHJlYW0KeJzsvXl8U9X2KL72PidjkzRt0zRJ2yQnadIhbZNOQMec0pYKCC2TUKRQkFmwDAqIDPWq oKBSRVQcruCA4EQoqAVR8DqickVBAUGKiopDL+gFHNomb+2TtBT1ft99n9/7/fXI6drz3muftdda e619TlIgABAFTcCBb868KXNMKqcNIPVHgLiK6ybOnnLPg0/9BLBxA4D5rdkTF83RTjacBiAm7GWf 1XjNRDIzsBegIgnA8Mj02dcvmqEcsADr+wGoh0+fPmVi1PfkIWx7DiEFs5PfPFtai+ndCH2mzbpx atsbpSsBRrcCDDw9dc602Wn3zXkKhz4AoEi8ZsH19ge/W/wRQLEP8z9eM3vinDH/PHo1TvhOgPj+ wOZOP/rxJs8/v54QXXJeqVIC+zwxqrGcxS+WbteFNv4+TPao6i1sq0IgUgMM5Vu61gPw+0MbQz7Z o5Hyix+OlcjcGPhhHsiAgh68MBpnsSauAmspvx+msYYYswF3EyUshk3wd/iCfEGT4WdMDycbYTf5 ELbAZoTrYCWsh1vhCDyIucNkP7kv9A2kw0j4GHaEPoZEECEZrDAZyiELe1yHJZWhz0Jnsc1iSMO6 5TAcjKGPQt+DETzwAAnCM9AFT4e2kiehLvQDzrMM+sM6BAuuZwCq4ObQCciDq0Jn4AqYBQ/AfTg+ hDqxtwe2kTGUkGGwNPQxYhfBKmEyQl2vax6OFb6aIheO1nNZI9erJBVnUgbXkAJYCubQ73g9S54k 
KZAe+gRHvBqG452ORJzJMB4qwAQDQUFiSDQ4sD4bNpPdoaNwByzE3lVQDdNhqjSn9NCnoU+x7xvw OBwgQZKG9/+wNPdHkOJ6shipcwReQ0qmwQFIwx5WBnhtjlzp0mWUriZiIxriIE7yNXmcrCcCeYHY oBLvaTlS5gHYRknoOM6Vjb8YKTYcPiAjSA4RQxtx9UFal/44JmstImUYlIW20jcRZx0DvLsyHMGK rRhUws3dgJTNY4BzuQpbzZKAjTMcV4SBEXswwFlIUIbrXQPDkFM+ggWwB8aGtsLDJBbnQcnSbmAh ckk6XB06Sq2EhH6iyTSZhWHovshSmsxah3P/Kf2fLzoN8WPMsEN0BF6AJeBmK40zKYdWpCLFe3oM 13QsaEKHQ4dpFOmL0vAu1meSTHiBFEg06qZcN5UYlPeCLOTdLNRHm3HE3nAd8nMickdONz2Ri4ZK 9OymaZieS3poGYHQ9xF+Xyet0ycSR2aT/ay8G1g9StM3iH8syte/Qj+GfiW/kGfJY3AQ810XL9gr SapGWismpYk4IpPRGTiPNJTTgTgHD0rpb5AET4Ab1/AjmEfehSHwLWSQPnjnd2JuKfhwzhXEjnN/ FrXBEKRJNTQQDlPDEBpw3iG8UxHvarOkCyhKshrHjoJUaQaoGxCfFVShC9IqDAdZ6DOcUTrCYuzH WmbinWpQxr4PHQydRElB+oWOhw7hKsyS+lfh/WoRbwxeE0GJWi0L55OKvD8P+2egrHqwv8j645r+ jn36h76CfEm/VGKbBySNEAj9glxvwREyIBXLK8GF91ZHXWQQGUgG0hTyMl4PkgcxNZim0D54nw9S 4JphJ/kAmqEe9d9w+BspghDeuYBcMwKuR5nIhCtxl3kP4uE2eA3egDthPtyCumEGzEVdUgqlZD3S tgh5bASMCO0N7cV2MyJXs3T1HvniuFdKY+6AnRfHwxb1WPcGTKYV5E7SQFLIa+Q1eBoByEEyHeEg uQ1hDfmEPEauIHr4J4Y2+AExfAzfww3YcjPNJ/9AXZQIP8GXRHNxK8FeZ6XrdbKXbCOjkAsAR5tJ BuKKhj+ySHwCeYZ9boepvXai8Kcv3vMWnO8WvKbh9Txez8FvKG9jI+XTyRLEtYZcQ9ZEenKR2CPB /6UPeYg8Le1wLP0a3u8XZCW5Ed6B98mj5H1pnqzmJKYj90euI8U999odn4ANfxWTG4iLgUSD3nS4 SA/ukviPn61w7JK4m7azIUwVJYSNhFUonQzfWDJWyjch77P8fpwr++D9SPfSFzmbfUpx55qN8jgb tTIC+QFXG/kCpWkhruYnSPdE5IDbyHTU2InSqq+JrMYS5KlG0oi9ZpNDyAWv4153O5lP3kDtq6IO TF0FN6Im6WTaWLqGIXeE+zyI1w54C94is8ls3CHfwR1Fi/vdQhhLFiEHdmE+fI2C64ga2qEdV+FR YocOUtuL2t1UYJwyLUI/drc1eNWjfLSR93G97sEipk9Rb5JmMh7hIbyaUQKayQoED4lDbT6ejOcW os2yMbSR3E2ek2pn4qXFywO/EiPCtz1XM2m+JH/xqiEJCJ7u/fO/hUv2jr+C7j2je3f4b+EPO8cl 4Ll4SXPoHv8v2qJ2OAGHEJguVKGGHY4cxCALr+5RmAXnQD2dhTuZH+ecHuY+aVUYJIUtUnVm2BKN ygVercf1moMZBcyBZWQ58ta9ZCMJkOMkROvoO3Qf/ZwjHMepOCe3jFvF3clt5P7Ja/gafhw/gV/L P8A/yj/Bb+df4Y/y38l2yt6QfS87J9fIE+U2eZF8uPxa+Wz5XPn18mXyFfIH5U/Kt8i3yj+QfyI/ Jf/Nepv1N3u0Pd5utTvsbrvPnmcvspfYy+yV9uX2J+1P258TZEKcYBQcglvIFkYK44V1wmYHdcgd 0Y5YR7zD4rA50h0exxWOiY4pTurUOwUXuKhL49K7DC6TK8mV4sp05btKXLNcTa5bXbe77nStdW10 Pedqce1y7Xa96Xrf9aHrqOsbd4lbdPd3N7ivcU91X3tadjrmdNxp0+nis/Ss6qyvg3bYO/p0lHSU dZR3VHYM66jrmN2xtGN1x7qOUKeqU9cZ25nbWdlZ01nXOalzZueczhs613Te03lf56bOzZ3PdD7X Gej8pPNI5+ddvi5/1+1dPwc7gyG0EUK4FHbYIFF8A9mKOuJ3pPjbSPEjqIa6KX4rUvxu7gme8Dp+ GD+eb+bv5x/iH+df4Fv5I/xpWUD2iuyA7GyE4oJclDcgxef8JcXPWpusG+wae5w9wW6XKJ5rL+yh +BNI8WcuofgI4WqhuYfiMUhxs8MaoXiDY7JEcft/oHhtD8WbXRtcz/RQ/D2k+BGkeFEPxae4Z54m EsXjT+filsafje4gSPGMjn5IcbGjomNAx1UdMzsWd6zquLujs1MpUTyn0985tHNM53ik+OzO+Z13 SxR/pIfi+5DinyHFSySKN4UpjpYGcGtDBqTxbi4rdJy+BxCMRgm4hyxAXTO3E3eL4AwmI0FPMCOY HkzD5GJYhPv4TNw1BkFJ5+edxzsPdL7febLz484PWcvO9Z0PYri2cyNeazuXd97a+bfOGZ15nS6A r+oBvjweVpInbzu57ourT9568rcvNp9cePJlLGlGWHVy6Rc3tM1su/Hkrq88J+9u29x2/4n7Tzx+ YjVuXptYv7aEE3NPTMCc74R4Iu9EyvEBx6uOlxwvPN7neN5x3/H0447jiccNx8mxfx374djpY18f +5L1Ovb2sT3HXjuGWI69deypY1uPVR3rf6z8WMoxxzHhmNWy19Jp6bD8bvlC/5qkyF9TbFI8qnhE 8bDiIQV6sIp9ip2K5xUbFY9h2qvwKNIUanlQfl7eLv9O/qX8pPyE/G35W/I35Lvlr8h3yXfKX5L/ Xf6I/GH5QHmZ7ILsLhnwQX4W0zHkuj9stNYwXJJP44b25B/5y/23u3YF1yrFR/6y9m2ET9F7XsGv 4h/8Yy1/Zxj+04dfwIBfFMld/z/N4w89r+R75s8P+t+2zuT7/KFk5qWz+P/w4dDWuxVu46fB/eh5 rIC7YTU8ivvzk6CHVbgct8BaOIu25F1oZ98O/4DjcAZt3Wfg32h5nUOP+Dn0rd5GG3ASXIO27mS0 aqegdbAP/ol2ywewH/2NqWgff4ie8Qu44/8L9/ZDaKsexP3/O7Rb70ApmQHXok0yC72LDdCIVvUc tOnnww0oQQvQvjiNsrQY7ZKb0FtZAi/DRrR9luGudzPauz+i7X4/eYBQwhGeyKADOtHCX4+2wsNo kQSJnCiIEkLkEbRE/o4W8wbcp1RETaLQA3+cPAEX4BfyJHmKbCJPk81kC3kGfa3nyPPokW/F/Wwb aSHb4Vf4hKwiq8kO8iJ5CT2IVrQudGQn2UWi0fePQW/4JFqeccRAXiG7STwxosX0Ktqje9DKfh1t swT0HrZCgJiJBS2tN0kiSSLJxEreIm+j1fw7fAlfERuxE4E4yDvkXbKPvIc20Aeo2/9JnOgBuIib 
fEgOkI/Ix2jnHYJdJJWkkXSSAafga/IJfAptcBQ+QyvzBByGz8kZtPB/wr34Z/Jvco5cQP/xV/Ib +R1tpg7SSbpIEC2nEHPgKaUc5amMyqmCKqmKqkkWjaIaqqU6Gk31NIbG0jhqINk0nhqJl/hoAjVR M7XQRJqEPr6V2qid3kkFtB1zSC51kjz0q1zUTVNpGk2nGdRDb6d3yHSyaHqGu5m7hbuNW8ndwd3F reHWcuu49dyjaBk8xW3hnuWe57Zy27gXuZ3cq9zr3FvcPm4/Pct9xH3CHeU+577gvua+49q5M9xP 9Cf6M/03PUfP0wv0F/orJ+eS6W/0d9pBOzk1F8VpcCckeGOPo43xJP8Uv4l/mt/Mb+Gf4Z/ln+Of x11wKx/gt/EtaIHs4F/kX+Jfxn1xJ78L7ZHd/Kv8a/wefi//Ov8P/g3+Tf4t/m3+Hf5dfh//Hv8+ /wG/n/8n/yF/gP+I/5g/yB/iP+E/5Q/jrnqU/4w/xh/nP+dP8G38Sf4L/kv+K/4U/zX/Df8tf5r/ jv+e/4H/kW/n/8Wf4c/yP/E/8/8mX5FT/Dn+PH+B/4X/lf8NtkELXUXy4UV4Cd4gX8N2tLnfhL/B 67ASzpNv6V5+KeyG9Whd/wOegnuJH9aQctyH7kF7YC1ZCK1o47eTf/Fz+Ll8Ez+Pv4W/HvXTrfwN /G38ItRxK/nb+TtQ063mF6Iddid/F383vwbtg3X8erQQHuYfQcvsfrTPHuSX8H/nH+M38BvpCdpG T9Iv6Jf0K3qKfk2/od9yyZyVK+D6cP/mzqG+lkPPsSVhJ1T0DxoGKzleJlcoVeoojVYXrY+JjTPE GxNMZktiUrLVZhcczhSXOzUtPcOTmZXt9eXk5uUX9Onbr7CouKS0zC+W96+orBpQfcXAQYOvHDK0 pnbY8BEjR101ekzd2KvH1Y+f0DARJl0zecrUadNnzLx21uzrGufMnTf/+hsWLFx04+Kblixdtrzp 5r/dcuttK1befseq1Xfedfea5nvuXXvfuvsfeHD9Qw8/8ujfH9uw8fEnnnxq09ObtzzzLPfc8y9s DWxr2b7jxZdebt2565Xdr762Z+/r/3jjzbfefufdfe+9/8H+f354AD76+OChTz49fOToZ8eOf36i 7bJVfNkqvmwVX7aKL1vFl63iy1bxZav4slX8/7pVLA66atTIEcOH1dYMHTTQX1ZaUlxU2K9vQX5e bo7Pm52V6clIT0t1u1KcDsFusyYnJVrMpgRjvCEuNkYfrdNqotQqpUIu4zlKILPKOaDBHnA3BHi3 84orsljeORELJvYqaAjYsWjApW0C9gapmf3SliK2nPqHlmK4pdjTkujtJVCSlWmvctoD+yud9lYy dtgYTN9V6ayzB9ql9BApzbuljBYzgoA97FWm6ZX2AGmwVwUGLJi+qqqhEsfbFqWucFZMUWdlwjZ1 FCajMBVIcM7ZRhLKiJSgCVVF2ygotTirgMVZWRUwOyvZFAKcq2ri5EDtsDFVlYmCUJeVGSAV1zgn BcDZPxDtkZpAhYQmIK8IKCQ09hnsdmC1fVvm3lV3tuphUoNHM9k5eeK4MQFuYh3DEeNBvJWBhMWn TBezOHhsxZiVvWsTuVVVphl2ll21aqU9sGHYmN61Agvr6nAM7EtdAxpWDUDUdzIqmrw4ETZ9divh m5rirGIlDTPtAZWzv3P6qpkNuCCWVQEYfqPQYrGIO0MnwVJlXzVyjFMI+BOddRMrk7YZYNXwG7eb Rbv50pqszG36mDA1t+miIwmNtndiSk+dlJKas9Tg4T3kJGxGzoHIBgH7NXacyRgn3kg/FkzpB6uu 6YfN8FNHsFdgMi7DjICqomGVvoiVs/4BmQuN2FXnUak3ONt/vLRkYqRE7tKfB5ZkzNHDYFjfnQ54 PIGMDMYXigpcSJxjmZQvyMpc0Er3O+fo7Rgh+aB2DHarK/IizQWBrerqVhEmYSbQNGxMOG+HSYkt IHo9dQHawGr2dtfEj2I1Td01Pd0bnMi+OyR3Nj6gdPf8ReuNcVXTiwLE+D9UTwnXDx7hHDxs7Bh7 1aqGCG0Hj7wkF67v11MXSQXiKsZwiTSSoomcVIucOK6nMcuM0QR4F/7JJU6e3KpQIitKJcQ+IKBv uCIc1qkF4b/s1Bo6y3pJ0cVukWkGijyX5osvyV8yPc0qDifMu+ngkWNXrVJfOvWhnoDGFVC5kCsC WldAJ6XjXC1G3SiPPaBrcKECie4JWUD0o8YcShTq7GPsgZEZqFlKTGe9Z0sCtSjugSgX8isLZdJY 0VKolQaNdwUSXCaiL+ksKSz1mk6eZc3ULoY+WgqVroDeFYiR0kZXizmGzSBGwh3bE7IA/jQDNgF9 yf9+DtHSX4IrYHaZQF+i7ITIXCT9ECBh4teOaUicWMckj/3JXKPGBOQSeQWmRiP00kko9NJfeNiR KLeBGg/+oZTW3RyWTCHcrdcHR+DcRD+wOCvTiSmQUna3E/+whDGlvQHF0LWqX6JTqGsNhRqYVpUI QBtcdla9qgGTzsCIDFbrtieiOmhw12E3DtsOwK1k1aoBTvuAVQ2rJraGmiY57Xrnqp2ckTOumlPV 0C2kraFdqxMDA+6sQ76cToqyYGdoL3d+e1FJ7oFyA3cexalZCqMx9CL4EWoQ1iBsRZCDyJ1rUWly Wb9zLYVFueVqloISyAs1YTwCY8xvHzY811ZuxQI/Qg0CqzyAIMNxz0EDQjMC68ojtnOI4Rw8hnCG leAQP7f0KZKw/NwydGRu+VCWQsM8T4oPRuKXIvHjkXhFJL4tEl8XiadH4qsi8YhIXBaJSyNxSSTO jcQ5kdgViR2R2B6JbVL8U8uIvGa82Z+QcA3cdzAHoQmBg1oMe5c0I2xACCDsRTiAoMIRzkojJHJn pRFOY/vT2P60NMLpS0qaETYgBBD2IhzgTreoYu3lIncr+BBYXIvAY6+HsNdD2Osh7PUQlgCGegQ7 gg9BRKhFkGPNYaw5DBROcgfhLALFsoNYdhBbH8TWB7H1QVy83jmOe4NOQIfKxj1J61sm27xIgxZc 8BZc8Bac+0nuEI51SBrrEI51CHsfwt6HsPchaayLOY4b28JNtrVy/2ipYNHr24XJtujyHK4Ch69A nqnAG6rAm7Bz7B3PvRieRKDIO/2xtj8O0h9b9Mdb7g8yrprzgBt7ltCroADjYsyzuIjLlOLCSNyP 87QUIB4H58NRfMiFPpx4NJeKuVTMpUq5FMylYC4Fp+nDMAV7pmKch3EK52R5XER7S5xZ4lh7i+CK JLJzc1/lBDoKiqUmwvaq6tyG8iguCeeZhLNP5RLhMALFysSWnFypW2LLgOpIYtiI3PIYLoHOknDF 0/PIcjbOgHE6xnGR2NZi7W/bScrpGFwFQD7SILU1SCoN0leDpNHgOmuQPBpEq0GO0CBHaJCPNMhH GiSmBvlIs10XGyu20n0tKXmP7aLvwhn6rjiK2gXymOyMjD6GXgF9DB0c+hg9Q+ke+R4Ftcn98gny 
Rvkaucym8CsmKBoVaxQyP/VzNbSG46Uz0FR7pr1aprfqBb1Dn6rP1FfLJ5TPoNfiIk6gx4HQ47RR CXhbTfQYltnpUQx9GIoIFBownCOlmjBsllIbMAxIqb1Sa9anScrre/qxlgcQTiJwUrnUlx6lsyRs dnoEsRzB1keAo0foZqlUTw9jDZMDFvoQRIRaBJ4epg9JbTbTT6EV4QgCRz+l16Jg2egnLfnRtvIu +gm9Ssp/gNf7eL2H1z683kWCRkvwnnRX+3Du+yCEwIEfyxsQ5iA0I+xFkCF13sN720A/wNCLoYjQ gMDavwdrEPYgcFj3LsJ7WMrGmoAhgeV0CSym2xDTcroI4UaExQg3oQAtp9cj3ICwAGGhVDIHYS7C PIT5UskshNkI1yE0SiXTEWYgzES4FksaEccUCUcj4mhEHI2Io1HC0Yg4GhFHI+JolHA0Io5GxNGI OBolHI2IoxFxNCKORglHI+JoRByNiKNRwjEIcRAMFyHciLAY4Sap/HqEGxAWICyUSuYgzEWYhzBf KpmFMBvhOoRGqWQ6wgyEmQhs/CJp/CIcvwjHL8Lxi6Txi3D8Ihy/CMcvksYvwvGLcPwiHL9IGr8I xy/C8Ytw/CLauI0vKg8hgiJEUIQIiiQEXgmBFxF4EYEXEXglBF5E4EUEXkTglRB4EYEXEXgRgVdC 4EUEXkTgRQRe6Qa8OL4Xx/fi+F5p/DZp/DYcvw3Hb8Px26Tx23D8Nhy/Dcdvk8Zvw/HbcPw2HL9N Gr8Nx2/D8dtw/DZp/DYcvw3Hb8Px26Txl9NpyEjPIryAzLWcXoMwGWEKwlSpfgJCA8JEhElSydUI 4xDqEcZLJaMRxiDUIYyVSkYgjEQYhXCVtPTTYCbimSLhaUQ8jYinEfE0SngaEU8j4mlEPI0SnkbE 04h4GhFPo4SnEfE0Ip5GxNMo4WlEPI2IpxHxNEp4JiCeCXQLjEVcTFiuQZiMMAVhqlQ/AaEBYSLC JKnkaoRxCPUI46WS0QhjEOoQxkolIxBGlocwHIXAMNUgphrENEjCVIOYahBTDWKqkTDVIKYaxFSD mGokTDWIqQYx1SCmGglTDWKqQUw1iKlGwlSDmGrwjmoQT42Ex494ihAHxdQ1CJMRpiBMleomIDQg TESYJJVcjTAOoR5hvFQyGmEMQh3CWKlkBMJIhFEIV0l8Nw0yJBxexOFFHF7E4ZVweBGHF3F4EYdX wuFFHF7E4UUcXgmHF3F4EYcXcXglHF7E4UUcXsThlXC0IY7PJBxtiKMNcbQhjjYJRxviaEMcbYij TcLRhjjaEEcb4miTcLQhjjbE0YY42iQcbYijDXG0IY42hoMuIZvoTcSCUtKB0vI7Ss1GlI0NKCOP oaxMRpkZjZJRjRJSgZJSghLjQ7nIQvnIRDlJRXlxoVQ4UDoElBI7SouVTsMxp+KYU6Cj3Imz/h1n vxHnuAHn+hjOeTLOfTTOsBpnWoEzLsGZ+3B+WTjPTJxvKs7bhbNz4CwFnK2djhDN1vt/nWy7A2Ee wlyEHIRshFZiEQvQMupA2IBQjVCC4ENIRXAhOBDsCFYEMBrRQY6NUYrlCbSUoh0AWvKqFK6Rwrul cKEUXimF1VJYJCbUal+t1a6q1TbWaifUautqtQNqtUW12ldIEJZhi2/F5GXadcu0K5Zpxy3TDlqm 7b9MW75MW7hM22eZ1otpO/mRlGDDx6Xwfim8h4XQIYW/SuFJKRwvhSVSaJdCKylp0YKqlZxvEUrx vs+1CDUYtbcIkzDa0iLk23aTTSDwBGzkyRZhPJY+0SIMx2hai1CA0dQWIQej/i1CBUblOwSf7Xeh lSditO0LYZ7tY2GQLSAU2jayshbbY1JVlG2e4LFNETJsk8PFo8NRBYtespUKz9qywiWZ4ZJRcao4 VXMr2SnmKZrfUTQ3KJp9imaPojlD0exWNKcomm2K5mSFQRmr1Ct1So1SrVQq5UpeSZWgNLSGToqZ 7Km8Qc6eWYOcZyEvpfWUhewBPgGgRElhEDTsoqVoJpRuo30DcdxgOnhEfzI4sPcaGDzJHrgwwtlK 1MPGBmTO/iQQOxgGj+zvmW8aHDCPGBwYMWzsmFZaGmiqHGzHT8A8XMrurawLuKVkKwFM50bSIqaL IukmTFdH0ti+LtDXM7hVERoe6OcZHFDVXj1mGyF312EuQG/HUUaOaSUhVnRbIju62wmE2G67K5HF odvuqqsD4wK/yR9bFlM4oPIvgoZI6Ln4MV1MMty1N4oa2wsKW5XClqewORWsfPAILGx+QdFcpWjG hQgXmpID9w8eMSYQSsYbiyQG46qNsI8bs5P6aWlV5U5axqK6MTvNG6i/ajgrN2/Am+xph8Lpx3Yo m/5IO3CxduD6QzsHLWPtUlkUbueQ2jkuabetWqiq3CYI3W2qpTbVl7bZcGmbDVKbDZE2XLiN0KtN XD8QpDZCXL8/tXH8F21S/7KN5z99pvT/j1W9P2QnDCdt24oXsFPWBmfVFISGwOoF002Bpkl2+04o Jm2RA1h3w6RrprN44pRW0uacUhkodlbatw1f8Of6wAJWPdxZuQ0WVI0cs22BOKWyZbg4vMo5sbJu e800/6xL0N3RjW6bf9pfDDaNDeZnuGpm/UX1LFZdw3DNYrhmMVw1Yo2Eq2oGk77aMduU0L+uYlw4 3k6j1Mj1DYlCXX+jfk6ZJALFgmlZ4i4eyGaI8tQFNM7+AS0Cq8oqzypnVSj4rErHjtAjVaZlxULi LrI5UqXH4hhnf0AR+NOnqvL//nW99Jn/X3z+m5bQXX+9qWpGZe8/Sag913vm45/nhp6BMIcDw/xI wfXzPYA0FjUNqQ2ZDdVcg7VBoPPn17HCV9GrYl4P868IlpHrAZkvQhrsGPngKOEEsOGAleDYJByx KeJQuwC4ZThIHZl//Q3YEIO//HRXSK34bwH4eyERYys3Sfo+ZVsEvgwuA1YfH+xCJX4Ylfn+CIQ/ U2E/ScU8u+6HZzCsQ1gJK8kKYpZK18IWDBfDrXAfu0VYzlw+MgaeQ89/PxwFD1wlfVfxN8zFwttY vz/0E/SHgzBSap+GZQ9g/k32nT9qww1lP++CgyTE/0BiuadgAVlO/s1NwPEfwBGCdE+IfV/tNnhE mRl6AdwgwmxYAvfAoySaOELXhY6CHIyIuyr0VOhdmIi126CVPM/V8ktDj2HPEXAd3As7SDbfwO/r +ip4S6gx9DFo4A7YRKKIwL4wKMsIjYYk6Ad+GAfvh++e2Pn0rlDw89A2HN8D5TjScsR6D/wDDsBP pJIc5N0yCJKQLfR+6DNQQBn2XUc4vPTEQQaQZ2kC9yH3OzrOJqjG3uNgCkyDRpgHT+P1HM7yDMkn BaSSVtJ6ejtdR9/g1vJL+WW4MsvhFQKEJxlEJIPJCPIs+Zh8jNS6kVsaBJyPHe+3AqrgSqiXvjN0 P7wrzfoodBGCM5hKGslS8jDZQPaTL+ib3Ej+Cv6H0NTQrdL3I2ORXgKkQimOMBLX9wXYDjux9xeI 
0YxzzyN+vL+/0SvpAi6fq+Wu5pZwzdxT3CF+NP9CMD/4r9BtoY2h3aFPQ8dC7TheDDggCwYjpUfC GLgJV+4eeBxHfR0Ow8/ESfqT68jfyH1od7H3BHaTT0mQaumzXB9uLfcST3iRX8e/HYwJPhFsDZ4J VYXqQp14f5PgFrgdue0J2IQctwNHayPV5EoyjIwlDTjiCnIHeZq8QX6kPB1HX+Tc3FxuMXcTt447 z7v4xfwnsgXB+uDa4M6QLzQfZ3x76Hvpe6Fm6IuGy0gYDzOQM+bAAliEc16CNP8be6tDuu7CO3ge cb4MryBdTsKPcJ6opLcYkokPr36kDO9qDLme3EnWkyfJl+Rb8gslOBMP7UOH0mm4nhvpm/Qg/YIb yT3H7eYOcgd5Iz+EH4Vc+DT/ggxkMfJS5QcdRzu3dj3Y9VCQBtOD9SFFKDGUFKoObQ29EToa+hdK rh0ykS+HokwtgWbkmlZcqfeRAw/gWn8N3yIPyaR3KlKImwwh48jNSOkVSOtHyBN4bUHO2Upa8dqN 117yFjmA1D9MTpKvSQdB5qVu6sUZj6NT6U10M32VvkGDXBSXyDmRniXcFKTpUm4ltwnv4WPuJ+4X XsfH8W6+mJ/C38s/y7/OH+U7ZNWyIbKF8hj5nfI1Ec2xv/frMqSK5uP4lNSh/GuQ4i/St2kWSsT+ /x+uO8gv8C7pD1+TLuTyO/C6GU6jHI2mFeQb5KTHSV/27iXl0D+6g+yFDbCRe458Sm+BO1H6s+EH DAmdTrLJ7TQJteE9dDt8hZyxH+XlJ1qN6f240ibYz+0nc9Bj+JncBex72A00HqaRj6EfuZ1Uwiya Dk64nuwH6c0smcgT2dWob6cx3cuvo9/TdeQMemCPSXO+k0yEDSQd+W0/uRq20ja+D/8qcukAlFIL th5O5eRG5M1HKA9P07eRd7ehnA1FqXgApXcDykk5zjoNrocKMgyt2l+ICmLIHcjt41Ey78D5PAvP ki4uiLgGhHZJcJr6kM/XAXuDayekwDOhu+E1MgnleAdRwyPwBVzJnePjccc4yyfLqkI0OAmOhIbB e6ix9NwJuAKOkdWoN66Az4gRHg7NCuUjN+4P1eE8b4XpMEpWLrOiNp6IPurrig3yE/ISeY6cyBbL JsuGywbLKmR9ZTmydJkgM8uiZWr+DP85f4B/jX+S/xvKbjYfz2u4E6g/t3HrudVcIzeE83PZyJPJ HE9/o/+i39Hj9AjdS7fQ5SSAszwWeje0PlQbKg31DcUFg8HzwTeCLwQfDq4L3h1sCs4JNnS92fl5 58HObZ1PkQtdR1B/vU7eC3bgHnBDaGzoytAFlDdDaG2oNHiYrMF7dEEXytcHqFfX4ro8ibQdgxpO pOx7tUE4D+1IoU+xfidsln4VoAGuko+EGlxvN0rmLRFunIK69mnMcbhWsbgD+JHiV+KajAP2/lUq 7rRvwnOhjdwoHGObJCxP0w+JPfgEpKKWuQ73p8HwFSmD7/HaATu6HmLftJY/jVh3yrfAefmjXEdE yCb/z0BO/O+BouNHD6C98TFy6SHcZiYBKD4AUC9Be+9nAG1aGHStAHq0KfQbAGLcEdj11xDn+5/B NPzPYO78MyTl/5+BVQdglwMICwCcVwK4xgOk9gtD2psA6SfD4HMg/AiQfxqgTxNAX7ynwgthKEYa +BsAxFUA5b8DVODWe0W/izDw9GW4DJfhMlyGy3AZLsNluAyX4TJchstwGS7DZbgM/08DZd+GkbEf duNAAcWiVa44i2Uy/iwHarnsLMdRi0rBnyVgVg6+yeQZqj9XMqSrZKj+QskQfVcJ+Eu6Shjk+IQY IcaFAQEeOu3c3k6RfXfVzu/Fwf8e+pLPk8kgChLAA32hAl4Us80mh5NPs0qRK9HpcOSaTQaz2eR1 +V3UVRlvT/Ql0sTKYi7XFk/iW8k2UR2dq4KCXH0xKZay5bneqNxcWxkpY9nYfrneTH8mzZydZqmC XL2GaFixNjbXLvfJqXy21Vw5cyeJAukuhpyr77pQ3443EkmBn4VSoD9Vj38rddmepfo3ISY2oZCw IMdHDHKnw12Q3ycv16jIdzsd8niDMS+3j0wWLu/bp6+L1cUbFHLuP7Tl+C2LFj3zzKJFW5Y1VFRO aKioaCBzujRJ2oQUndpFzyVpTU6dOu6ZcKNnlrMGDLivFm1hPbfcWDlhQiWW/eu8yp0cH6PVnFel JhpjtMGqRVueWdjTYMIEAEI+DLVxHJxByguiRrWHi9IDgUNys2YXkZHsCCna68HfnuNz9Zoxxw2/ 4orhDKYVDR1ahMBeKtoSMvAu2SaIg3Gi3ghH4VM1p9VwFIlKSCsdKyaqowxqdZQaC/hYpRTJdLJY S7zyQ0MreXvbepNH/81Q/Tfg9XZTdaVuSLZnpW7pmyQmLy83F4k8lxC5IkJQ0qdvmKC8tqvRbLC4 MkpNdK05LjElo4S/8PuiAn18iTsxX7YSE8XuRKCwGXnNLfNCNAjwoKgeaKjWyJKrea3wCh0vvSM1 /kW8FYvTxPIqMGNeqVSZHWt3kgkQYXB9u749zOKYQNow6gwORNfeKPahdqNL54p1J7rlLpsrPsrk gTit3kOSZGYPWDnBQxLUBg+JicbAokj2gJ1iIL3bcvFdgJtJvIHiDdKCgvxYvL1YRX4qcohCHh9v SEAm6dungHd/e3jJs+u/OXzTsw//s76gob64bnz+xHHFdfS3L94N3jubuJ784h3SOCt47Kmnl1Zd Of+ZLzYvYRFbJQA+FSlgga07wRLaK+rNgj/BMtWy0MLFW1wWamkNnW0xmvNbsU5jJBd/2WUXfYg+ TB8RtdaBcPELrN2l8daBMcsNxCBWFxhEhyvfIHpz83FZZ78IMpVSk/AaHQx6ADoJYoDQSaIqplbf rN+g5/S76SBIhLXkKBJYf6G+BKmqP4W6w9+OxKgv9HhyfOCZW49kk9Za7hTCfJgXE+GDPn3phx8O 8vsHda1j4YcvZxpTCzMqZN6OvxdmZxUy4Eb7NGVZ6d5YpMB1SAEDUsBFknaCGu+yuLrgRvty5/KU JS7eFZXu9KRUp6xIeVv9ZpRisHoUzIIpKZNca+CCSxHr0Dv1KXrXAccB54GUAy6lFqm0vcyfz2Ix uqjgMWGv9oCWa1ITOeFayanthONIK/lxh9yZAgmtNGqHvtoqU7Fe/avzpfjKEVLcMrJA1UqHvESg WqHUaHfRaSAQ3Utz5ERuSTW00mmiyvKTEsTU8nyIdJfigSMwxlq1SmVTrlFSpdm9i9xHxoe5tZ5x KZJRf46xbP2Qc+2SUv4ai/zt7TGFhUTfVVroNYH+PCqzufMYscncekByuwRJtuKFAkCOy5c4UCGP UDwsdpgnT5PXXYK3b+dYmjG/6JbZ1w10J+hyUjJKGz9Z+o9fq1fO3G8tGzzpKNl3S0XJ4PmioyIj pSStZMesH54edU/TFFyNlSiRIq5GKewWo1S4k9BfyS9myu5NHODNyT8Ex8xUUWRSJRTNME/NulF2 
Y/yiwrmlKpVSpcsHR7UtyZdEk5L6KXRilDZfpzNUK7TRObYcmpPjqe4ns9kYmU5ud7okcu0wJeZD Wisd31JcnLALZZxDNCp9fD7H+fPzo7DFS3pDPkSRKG+9J88bk+f1tOd52/M8nphCL24Cud56Fnhi Ygu9c9uZ0vfUz/VA/VxiZPRA6qSGdWSCkYkqo1sqXuGyvmW0h46RHSFMRm711WMXfvnyvxZVFGck 2zJTS2Zs31A/NOvavL4l2VMVabU58+ffNzRBF2/JKBm3Yt9r31TS58uemDJ754RBGUWZpQarWlc/ yn+DPVbBFWZklhA+c0h6+YSrzIqoksyq8glHHqy5g/1yE/st+liZDiU/mQ4VLcuT1yQ/msyVJF1l rk2aat5plvUxk2SxqCC5NdS03TPKn8zYy5AWjtMMLG4SR47N8istsmSLJd3iTC60DLKIyROSb7Dc l/xU8kvJnybrUpJzkt9L5mJjkxyWxPwk0V2Q5IgqSBqEi7MkidiTfck3WbioZBILiogKuahDmGa5 +BtSvUu7vzavVERKX7IONJuSkpNb6UJRY7agaWCxJBtNyYmMb2L0if5EK7EqTAkJCjHRna94jQ4H IyjJJtBBEr1d1IKFj9ZpzDWmgImish+GtTxdIaqVRKFQJiYkGOEVVFVJoERVZUkCo93oM4rGWuMc Y5Nxg/GAUcWy1Lib1oIVddd4SXfNDSuvksh1iskfU2RY2lUS9LCy9i5PCe5shYUrsz08MyBiC9nr qMhIFeylz2QkdYlVIrmo6pvkTxYNGpY7sR2JyeKX3AUWpKnUQJeo91tErLA4onT50jKpY8Kt9fFS vC26sPdbZnUo2fUswd58cyY14SInibjCSSIub9IpXNqkJsSe1ISYk0Qcq7uf9KqcOllMdOYnsgCL UEuEX36bW0/mkZiwZo6/VEEXCAUkLoZwa8vZL32V+QcF25mCru06SH7oHzws03WeKMzKKhpelBXW 0sOf5So7T5Bfg0pmT9wqaQYdZMDvYrQ5iijNkGRCtuMdSiLdfrwpn/yUWN3MkUaOcLvI72CjyS3O DEl36CxOP4jRBgyQQtBKNm1PcfL6VrJW1MRXq7QTUhtTl6dyqbtIM5joeDG2QT5H3iRvlvMNGFG5 JZO8gjrYwV77EFWo9+1On5Nzsv5xNq1X26Rt1vI+raht0HJas2cX8ZPbwzp3LqpZiRGGMo4Y0n4K DaiuU6h2kVol+q6uufXtp3DBK24UTUYLr7TwJg8xKjEwyxLRQFDEe8KvG958M9KYxKOOiEWiCt2a pUexMEXC1EiMEKY62TWv/tXfu4Id36wYWprhHJ4lTt51+y3TGu+2mzKL6XxGeb78XEow+MHHZ0bn lqeXVmjjFt5046orYsQ8Wsvoz7TEEaT6KNTH7M2l58Uy3mqwVttGRY2OHm1bGH0H/4DraZc6304G RI0mrerW6HfV70V/GtWWeSrqTOa5qK5MjSraHD3IOsjGCxkpuDENFWMyqjlOFWP6Kak6RpaClE9l ejee6LbL27XCLjpEonDCHImiG7QB7VmtHLSMskjXrF2kkqzqoWvX1/quU5LN1XWKqWDcwmJRCTMe xL8+EVJJtlN+bArTwlKWmU+oc+OMPZSTk02DVly58ecvXt94YObHJOnvS6uySjJMWYnmSR8PKZDb Z0+ZMnvp0LK76O6ywhC8vv3z+0m/vd+QnM05Ql5WqUnXOL82OHjRmJlXT1+1mHlHW5BqzKoygwty ICiWKnilWpUmz0hLTfVUa69NUy5KW5jxUNq9GfxK2a2qralbPadkp1QXZBdUyrq0uoxrPVy1Uica Lfm6zBiJf+2YhgQ36NKra5wk2mlzrkH2c3qTq9F2wT3OshupmEB04Kbjt3OWPDvLx2A+E/NR5tyL FuuQdmnzZ5FEOdzJ6nHvYgZ2IdvIwryYn5WTaIs18mqVS+a2xtk9kBRv8ZBsZaYHfHK3h9hikz0k 0YhBltrrgRweg96m6834YQvRay+U9r2+kd2PlaX2cnXiers9d9TUTDly662HptTUVA04u3v32aq7 Fk6ZunDh1CkLTaunTVu9aPFNC+ma0ofrpz0/efILU+sfLhWbax/96qtHhzV/fuW111455Npru34e dsstI0Tm2VA4jOtxFa6HDbXHV2K91pHkzv9e8230Dxm/yjo0F6I7MhQrlbdr7o/eFH1EdkTzSfRp mVJrTbZeYRtjmxY9LW2lTNGqedH+rua45iP7Z0K75neNslAzSFNHxmlmpq6P2Ryj0IFWS+3OdInJ HenVXm45t4f7kDvDhTi5jWvkKMfFOqvVMvNPydWxWnuE01t07W4mAEaiExNBbkPvU0StIwsrnw3y gPysXC5nCqWH8evnDjnV3tX1taRG2pnlUX+R8ZHg9WgbM3LTiEURG69X9LZE8nJT4tw9xOZWDmmq fuqXUdcffODLmf/efXJJZXaxx2RN9TxM5FS4ecToxTfVrKaW/n2J8s31S597MfjszuDbr92bK/TJ LI42HCTH7l5423Xz72bfIdkfOs5x3GrJcy8QNdwelTx+T7Q67FzbRHVs34hDabrUoTwn+ZTMb+rt M8f19jBJt4dJuxMcF/E1u1K7nU5K7gulcg3cSpxBPBTuRP/tRzFamocN7eZDsEUT0rbSl15EdRNv Nu4ig6k5QtIhXe3hWXj/MAsikIhxxjUMv6J6+DBEHXyDyIJtSYlmp4IGi4bWFBXWDO36OejxWmO0 RuS2dGLnn+YaQYt+TPFLyixQZAF6vO2iMV6XZYzOMsZHgZmYLcl8rDnpxtawH8lcHfCWDGEWwwW2 XeA8IhtnWKFzl+R4a+d9TIVzs1jYK00f73Zygsv6ZWdjKrsfUl1ECbidPwZ94aAoLOpLbA7cC2fB DDI5dXL6zL43kYXx16cu6rvT/FJylNfxCgH23xtIqaiNSy3g1P/gaGKqR4W8XSdGy706v65GN0HX qFuuk+teoXUgBwW9ZbvLUoh+XR1qPJMU+tB8jo7Nt/mIr5Wc3t7vuk2SR+eJbIZDzpW010dcZn/7 KX2P3+zIzDdYsr1ZXiqPd+W5LZmmDDDkJ2SA2ZuYAcbcuAwSUTMZuCnWo71Nwh5IRLtLJjbyN/OQ JQe5b5/ug5awIEiudArjq81xWRakVla62axRxKevHTTuoQWH98yryc63pySkl2WUNtz8yEv3Lti0 jijvq3uYv91iKRv0wiB/QoI/IyGrT+2OJbfd97YttsAeV5aR4RuQ1mdwCeHWr95A4u9PZzZLItrY h/lXQeSvFSfemndL8a2l9zju8z7kuy/v6bJ9jvf8J73nvZoMR1HuoNy63EWOG3Pl4FWVFXiHOAb6 2hyfeRV6R5J/adkK351l63IeLX60RBkvzBLfFA4Jp4TzglyVo/ZXCLcKB4SjfrnAvu7lzyzKj3WI aUX5JY4S76OO9d57fTKvY7djV8krpYe8MoeojfHP8pLYFMFV9nd4WNiaK9OUaEo1ZZzo83phV2gv 
mJmrn0Cln7Hqw37HqrfpffEAoFxPN7NitHgHYq9EhFiEOIQYhGh0YnWilg2ox2SSqNmNyQyENDZ+ svbij8LuokfYQNutA0XJc9ZE0+5fiPX6yg1h7H+BRh9GA7owSlGTpOx9ErG5Z9IXf3Dr1d5zFnVm MYN2/zqXA+fxqtQl2jrQJ3rB4QUQq0fkS4TV5fXJB7EY3Wxx5GTmY18rrhIcBoHRUxB1cf4CYYBQ J1wrLBdWC48KzwrvC8eEH4QOQRMtJAp+4UOBFwRHqT0DV4YFxSwotfurMItBMQtKxQHV+SUsKGZB qTh4BGYxKGaBv7Qklxd9jmI+j6Tnm/PT0ky0qLgYXV8l2rMX0IkRaoUmoVngFQLBOQdaBhcI7Cih iEVNLX4p2j5UuiG89xg1TlxrxEBj8LOil+PMfkEwK4t3kUfZL4+IaizIc4joPThayfPbG73Ey3rG 6gu9rNDnFb03edn3Rbyt9BYx6vo80pA3J68pj8trJQkv9hcA7Kj3rkLXwacnenP58DPdpxBdJcz7 uVBff475DfXtHr2nHZgJ1+2bzJ3X7pfcJgxPnZIO/xIucV5W8tkRl8MjnQiy3oXdX+VbqdeVLH1T 2hAlP0pAH4jdrnTfOPOS8Qk6f6kDAyw6tN0sSFXbGTFYHKEHc59yWczowtyoaKn8RIu2xxWS3CFU QXMRJEy5oROiSpfgd4gYSNRKxdE9ImLPYkEmC7wODByiFDDiimqt38cQoaMlIYw2s3yTqMOEgy2P g7lfWHTiZeQ0RyYGf/rOUB07FSPzmAkQ/8eTsch2JgsXO2NI2AhLiHgKRmNCpIeTqc5Ucs8gtp8E 9w3yiwPtdpYmfVk4iMyym/yDPsJ0RbB52osD+g70lw3aJV5Z1jiEBL7qOV5z4K5TSNj2EzxAChPS kr1FmMsqCr5W/PN21mL96rnTbMwyQ3OZ/ztqSAegcSkm3O9+ykV5G++m6creqkZ3iQ5gMh+FoEFQ MblXy/58Anmp3HeXCtaB9iS0ZlMyHc5MB4hR0X5U1J5Mc76JkkwUpdRWOkZMtif5ksSk2qSGpDlJ TUnNSRuSVM1Je5NokiXLI7G0Mwb0dr1PL+pr9Q36Ofom6fRS3azfqz+g5+yM3TNbyYBtL0d8lfrI k5B69ADBX9LuP8UMb7YVdiHjkthCyetHw9tqSbcm25LtyZw83ZKaQlwCBmmJGSnEnZySAmFfMEMy rf/HZVboqDM14t84yVnpQDRY2WtBjxz4d7rtngeW/OO1FY8uX/Aj2XDwD8v31RNja0tuKN5/46gr 2O/SGtGC2IUr5UVN8IuCxCB/vqQrcDbpChKYdtEVGMNRbDiKD0dx4cjAuNmpK0jB5mkILoRUBDdC GxzmzsAp+Um1LEdZzvS7IaLfM0JN0lrrI1tKAoIpvEFJ+4AlshckSRuK7uKvje8JbygRdkmQjq5p ZEfB5e5mBtTzYgLp/hVGtaq7PAvLDWKcmCQmixmiSlQ3RYt6MUZMEE2iWUwULagCL/6oY2T3YqPp M/LT02NoUiqvcgivUPYPdgjq1ej4fPJLjsnMGCdGDSqfiqrEuPh81S8+8QZmE3nqS7pQHXpK2FMy pvjOlXi6GSLFYqdKhVKupHKb3Wqn8kSZOQWSOSGFWJRJKWCnST1sEbaHen/QNooxgNORTQtiwtzh 6lYCkquFG/yl2oEvDp7ZO+Hp2waVl5UNJlpJ6tfPumJZmlnSCaykjDvT9cprwQsVf7trAa0qysru RxjDdG0Zd1dleXoxHR2R/qwiJuF1oRP8g9zTkA590Z4syJARXzaR9TH2cfbxZ/g9/szSrNm6JTqV zB5vf0D5hnyf/ZD8lPxCHyVALxKHF8ZgHRgT58vo6wCyIp2kZ/TN18SqmZL1Wu35enWtmuJaqala mJBJajJJZma6QczKzTdMidULVkW6uimf5At8lBa379E7hAkO4pBO1mKj/A60XVf6WukoMVYh4r5g U9gVPgWnMPfzvxTetDzoIwyVlmqItEpz/f52dg4WrRezxvr1uPwsMDDFXNfuYW7ZvPa588K7ghp1 OjZSS7rdGo71iZE4Xopbwl2xcyFuZDGF+h+lc0H8Yw880IueWy8URM7b2VFGrxOf/D7dFm73IxEu /IBIej7Upy9nG7Lryqf+SRSn6xfXNF59Tx9reqEhpfDKv4t7DjrZop69afrSsf0Sc0cPemWgLz19 68ybPzfkZBelaIuzLe4Efbz5qTXBsUwzkEZTaWpacqxQlAs09EOojb9fFoNSlkEWiTkyqlKpNdxL yreV3yp/V/E2qtfYUvRuL7VrvCl29w/uHzI65Z32UIo2RVRF+90S7TGRIqqj8qWcCROJIu9IFNPU buV/OPyFnt9Q7S41Wwdq7dq06CZCCG8FB1pA0eo0QR1lYwtqAIVeWk1RUauQBxTkpIIoJNy4ASss meDSITuIhoTwse6HxjbjGWPIqHjMSIzdzYxmzzVLJD7ATVYyXyRWmCtp83bkBH9M4Vxcr34RobWk Z9gFmUqQ2WwkXYWBXe6wkQxlmg26T/NuhoEjbxT17tQojVuT5uRTo1xO0GiJvsQDnu5aZwrlUqjD KXNyWEtJTy3jFPT1YS6JYUsP8UzI3anxzj/sBaTXsQqZPPjhmk9IWvCbb4c/WH2W7QVOaf25US03 N7VsuPfejbKYYH5OTvCzA+8Ez2ek50q7wUIWdj60PBBYMveee3AnmIcSvRol2gNnxCFHYw4aDqcc Tf0u9hvDNynfpXYYOpxqpUHlpH1ip8RMi50SPzWtQyOP0pDYgbFDUutiPzccTfnB8F2KwmLWakAm jzMnGjVavUqfSBJbibDDAYvTcUF+36EX0hWqVjJIVFG5UXBEyYdaW6UHlAVzrCettNZ6wEqtlqw4 SWjnuAm47W6fe46bd5sz/7mk+6gEZTY4D4X3VNjP7DqlR0+znokW23fDj+qBWYmiXsmsYTULVCxg zw5aUBWEbTzpKRjp7Wa6U3tIe9HBDJ/LAvMtN7lSM3E/zUiON2UPW3bP1s1vNA3zXeXMKK1fFbxw 5rYdJOWHUfdy05z+gbcOKjPFNib6nvnbotUW/ZCyjMrSq6+57dtjxGZnGrQM5ey7iJyNEb3qKKVO Fs+d0xF9lC3eZtdn2KO88V67PeOY+1iGJGUxXfbOlGg7k6sMiYUxYWcyJ+VMmEgU45iUaZ3KXg9v X+1tZbGnvX8WNZN1oHqF0hjHpCxOoUQp00YlGI02FROhaGgkcwjdS04SSiyZLrYyFpu+Rj9B34g2 Upv+jD6kV+5hxpFn4JruU1zpUWS3T9AjTfofI6JkcqbqYp2xLhuk6jBIiUFBckf3EqSwpKRnRGky olCO0jU2J4lSXypHdsEQb49HORIMWBtv/Cs56jaBJS0L4UVMKIjIUe/jSb7UOVBErTlg/fBvg9+Q tE9qHxosyZEzLEb3PC6L6djNpCY3PYNEvXOAuHNyQoVZ2b2kiEJ/XNWZKEXJkEIeFtWtsa2GlxPf 
SeS1zNscmGTNn0xnGd6RH5YfMRwxfys/bTht/jc9L/93bKfhV9tvzug+8mo5jZ1hmGGaaZlpm+q8 jz5ma3Y+Z3vS+bs5Klkh46LiUqxEybYY9DSVkl9tduQ3KQ8o6VklVhDji7FWMblAkqroZNxArUS0 NlnpGiuxthKTWABiLHtqImAiqcAGJBpq4EPgQuw8SROdj6a7wLY9gW17gmBU8II+ytpKG1pgYRRz NZ0D/FI82M1ixO9MyT8ZRaIs7pSFaEU3iIY40Vlgi5sTR+NEbXR+nNk1cFZYy7Lt9hRjDFyfIZGT aumpKyrdeVjWLkUoti9aRXSwrJGHk1KMk5ZiZ6wUt6R176z/qp/LnrmUdL+hE3YOaegQ6vgEvy0T Ayf6gy0YSw4VCjwyh2CUJDwi4PxF3sArPnzSpOBndL5sf2L1vFeHWtP7WdOC7625EDxK/AeWfpR3 hdf+lffBGdMf9JHxtZNyDEWZaUmuCmJ8/wiJHpM3aPaVkxeMGT16DNJ0HRJ0Lcp5HhkiCorEhMTU xL6J/Ho3odH62Dz0VUQN/bNHE4/GqhqNVKzle3lMmks8JnXYU2Jek6jR8N3/ewfInkuahavpn52p cuvAFLGqOj9FHDICg4IiDNCwSpmSJiRD7OSsPJiclZmpN/lMoqnW1GBqMslN8ujJKhWdrFSDx3de 1kq+FzV2wSdQwVLgITGEKQy7RR+/SBPUs8eajfrH9Fv1e/Q8oEu1R/+hnteb81sJ2datypEPTulL 2nH9pHcXTp2StEWJvt0/F8u6WKK9vcfu1TNtUoK6fR5awMJfO0p54TPeXk+F4uMjTekUknLxHZLF k1g4ad+eUW3+fulx7uWTpg4hJayM7gnquv0m8jMLh67YZuuX6S1WmEuzhkqP0QhYcGWfw5XtBz+K 5jYVkcuN8lQ5x1wLGjZxEkwm8y56uMfICZ9/eX2+nD8p5bC/olaplL2torCt7BAEe+9hIHKCl+OL tN1hHdgP7K1kixhNziejoZSelhYTo1ebTWw99MoaFZmj2qo6qeJUliIQWKHO52vKIbYckmMurJkW WQzpjT62DnPD5nD4rKb9XHtY1NieSurDtgnSNyby6C3ymkNCTH74GdyfyrutlrXiLRUzN80eavKV Dfp+oN9nHpLiHVc5o64mIcc/6LtB/hzTUEnjor0y2O264uEFweXRtkK2DP1sekLm19g9BWOCTb3K wmYM+6FPXIvBuBYcJMEbO5l/tj1KW0aZforHRK2KiFpRRy/++6pLRUQbdiYv/mx/70Xge37cv7tU bR3opX5aQzm6i46W3i/R6gr9HEH0clxF2StYagBKR7eQRby0icbH2w0+Q4OBM5iTxz7Z895O1znp /bISv38uar/wSwL1bA+Mdxb8+V2oCCFp4BzRX5AYuZqFF55iO5cs5ujR4E1d5ZcyLvJpFdJmPdIm R9YqboS4q+Imxd0Q1xS3KmFF1ltZ+7wH444lfJx1NOfruO9yop/xBuJ2JezI2uX9R9xb8fsSlHzc QwnrsjbEPRn/TMJTWYopqNbXwCrHmpx74uT6OE9OUc4EGBU31jEhR3Ey7vuc83GcyhGPCqGPY4qw wrHP8aPje+cvPrXB2eykIPC+EcK1hhU5+5zv+g4K5wUVCA8bHnY86HvesMu50/ehQYm+28mW6gLm wrUMCp90DpJyounKIfmGUSMKYqNBl2ODpBwvuOIuxCni2K7gqchnhxLba4ezeG/L4AKpuLKGZQeK fUYUOOzVBXah3F4pDPXVChN8axLXJK1JXmNdY4syiNg90ZBkorGxYjIhtPvfnZWbLuESO4IQ5hbp lCoqckptQ0iWTiyacDdWX/y3aL2PJJLJxf8M0V2eYB2YLVYX6LPt2RuyA9lns2WQ3ZZNs9ntOvoX tGWT7Gxfo/ExdF64DcYARieNvM24xrgVXRmeHc2IRneBUVQhZGTmG8XCAmOTucBoNJRrI/PqnnP3 fJOlU/s4aeOOUV38LxbdU9JZBwoAIhIOxBEFzKUWVXEGQ1ycwelwsBxusYacHJ9D8IlJ7A1AFlxh LtDkELNhgWGBj4uDHIdBcGb7ctR54TQmVcTs20PY7yDeDzmE/SdsA123IzY2DqQtQ6dWsd0CVA2q OaikIqelqshxLYtFtUbvV5lzHY64nF30dwDyi2iOs1uFyeYUq3Oy7wuPajJVT44xiOj3G3aR7yGO bhJjYiEGLHKzPEPtURM1VR/cSY6DyQP+yOfcKQ97w1Pf3s4AxRGNEcJMFTQgPZL1WqL/Rn+h/cfI S8AJhcza8ChXZntkS/Vv8iuz2ek0S5o8+rzwQTcaMZis90gmjfQajjbFidzgcCLdAISVemWJsgSY bq3rfidIYPY8uic5YvdxdbQ5crKtS/AbRMwZ2OFzEmNyzEgx5q2RvDWST47kkyP5tEg+LZJ3R/Lu SD41kk+N5J0R1FLMTrzZFOLQEkxl6+NkgUMKIv1YLDqwoY9N3BAONAbMsqApKtZvYAEbZjvGqZEV dUZiLD+BfkyM3+fozunQj/E5MDCw5y9OydHBpcdR9eHZzIqK9rubcJXTWJDKAisLklmQxAIH44Ec UUoZtZjCwMHOf3JYYGVBMguSwllzNGYxSGKBmwVpLEhlwZ8O5P8PP3XzPPUQTjK79MUcB1JIkj4V O/N34E3Z2UManAgw4oU7hd+2mDt3Xj3Uz5s3dy76MX88AsiL6RuOI/6NgnSnpF03lbzW6yjg7AYz 7rgsVfgbqZFMnB8H+R1HyajgsxdPA7qG3OxJzJb2jveCR8LbSKp7L+4gN7P/7YU7iAmWiVHw374t LJkqf7JsVdaBJmysDgu+JUbP1kmyFdnJe4N+AzMRzd1bpMcTsUT84b3xP26J+/9iPwy7axdvEM2C EwAywm2G/twA0dgWTTbJn01+NvOV5J3WVzL3J7+fqYxlh0jbLc586fUz4X+19izQbRVXznuSZetj WXr62JJj6cm2FDu2PnYkJ/4kkuPYwQ5JTOJwClsTFPk5FpElR5JjUk4Xt3zSLlA43RYKu9uwu6Ut bc9CkxISaBdYupzTL7RnSYGe1mmXDaXdLJwun7KncfbemXmWHMdAdxtHM/fNm7lz5947d+78JEdT VMp5c6GbvTeH7vbeHTrmPRZa8C6EDB1VCxsX4mIcc+vN0Y20nwAgJfDIqvrzl7193xGPLXf01N8I qjH3m8vfESdfD5fY2jjMxSrU3xfyei6Tt2wNPWGSKtSfJgoFn2BZGf+39Cfw14v6ennqcc9wzWkA H0is8QTXxYTKLW6fYZ1PO2eo3KKLRaN+v8MAsgXZPFbrTHTG6GJZPQ4ynp6ouoqmvZ6ekbzHqZWc p4Q3E1aP7I14RS9K0Yvy9EL5k2ulACCgS4DNgViAIggsBN4IXAxorw/MBOYD9wS0ASwTwDIBwHSc BENQ9HifpZcO+YHYsV6hpvfB3oXes71v9lY8TwENfdl+bTDem9gUj/Ym+rdEe+cHcIP1iu0AXYl7 
q6NjEPzFeLTXNRDn00/+L39ovG377o+cyPUKvafFRTIAenYNtfXvQC8dZzb7sQBaDST+hLOBrWpW Q4UBZiP4AlLbNXjgb7zvHSzgxAKUXbVQwokZnZjRiS10Ygu5SaBVwNRmycGOoxNopWtX3YNbt+MX ih4Tjw0+2pLcyr5PE+3BOF03kesaqqrX+E3+er2ngTR4qipdxtoGoaHK3aCpq3Y3CHT9BLG20cOF bDIcR16CdWmcxyEGg8ZEjbS5A4M4XaaWNnPiSBtb3K5kO6cC30GtxJ1Sez19Pg4xy3tIwLOL9Lxi Fz2F5LCq+xD8DPSy59Ij67R870JY13lVLPiRdbHp7vy26xKbN48829jU2OCPUbCpqXmoIwG9+vRI fHM/HjDV3NnT4W9vb2/bNPrJxRieKxWPhpsl1+Biij2E/MEBBjM/GCGwYuvBiuGeRUzQJOpxsnZf g2ZBv+AV6YxNXZXGZenSVIv9qm0wFAqvmLGpzpy+asUr5lJhv102beNdNBzithD6ogcnbVbh7XqY tMV0awMBi6XGUOvE7lelTzTEqOsjuWJ8+lajF/TuDR7ipU5TKDQfFrxhIezqWj6F66OnUpdsJ1My mMbxWZwAUwwuuDKRcZlcfhK3JKv7Nh7ddfKGMZQDFUhz+LptB3erM7hI3S4mocFI5NC1n128ZWky csuAt2XDNYu31Hh62OSthstFJFdfXNDeBHKpgbnbvySue1w8pTtjeNn8ivSi40zdi65X6l9a85r5 XfE9XfVzrufqRem87VXHOdfv6rWv1J1Z87r4mu6c4Xfm16XKibob1jxU8RX9l41fq/5qTWVanNQp hoPmG6QJp87uM1W6fVqjBVcvDYRYiEzOEi15UnwbxFYr7n3cWxWpmqnSVJ2GlAYL3gGjV8LYCbZx +IeOnLG+sUYflzBw0LMD+rgLzw5AzHoEzo/tuJzUzG9QaJ3sZo+6ynjTLYsX7rrzIrn9UxfvuFPQ 3Pqjbckv3vHEtz/9V98WHjv8i1s+8csjN53/1B2/+3hqz8zx2eu/8hUiXnxjcUz7eeBPgESFpxPh C963Gi+0Xgi+FXkrqtPVGwLiSd9zvpdafxb8Teu5oM5bbwmE6+WAVgrOG4xReuIAd2A8iYa2+kRn c3v5FvzyWTDbdb/MQtHld93deDyf3N7c5mtwv+26saGyTtfpa4YJtHktcrkpIifkUVlDZIssy2dl 7aOyILu76j/mdrtcJPB7GKuoC+Dim+7P86XkymO4lBzjOzN8Y6bvHD1b8ipVbLbbjmvKb70Kvvt/ UmPH9tnXR1vWeZpaA42BdZ61XmF9EwQt3javEPV1qovLZfs0kQ5/IBLobNJ2+MNNwOBl68tSe6h+ TdAfqm9rqmhfA+/r3ew9W2KmZjLC/NgINfvgtAbxSEcIgyCuzIcwUM0+LkiP+1fzYNY7+JYP3f4T ogG+20f3eKZuojv9ny9z6a544KqfCi2PfPbxXQ+I9q137bv/2k2PfOKT/3Ro8VHa54Khbg29GTXU EVn891M/uDUbEj7Tdts1hV3Du//mAbCGGbCGqFWtwuTjsiB8QSdI1OXxu2MGy4hFfMTyiBVGdK0Z rSOuQyaqK0oz20tWItVzG2wdRVv6LcRlvuCKnV+XZ9hUXSVZ5WA4ak30b4PA549azW467EQ6qRN2 whOg8Um7Kyq0mo2nhIaEz4yLXDq3y0CqZOi4o1XXQ+fV3VMlVLnbBAKTNdQsayPx4eLkqO9634xP 53OtK1t55FsTOy3nwMvGBZgd9LwtSIqFdKOWLXpRZaixiBqLaG6qqNFYm4jFSrftVHWBSSGaB7sF R30rBtSjtWLADcOhJYNLD6JSG7vKCqbY94nP909cl9jUFhjztX19ftmiJd2w19w5P755pDPavunK TGbxB5cs+YB07wOb2gfS3SY+k4jpJV3MJTljk5HbI/dGvhT6VujZ0Bn9vxnOdJzTv9bxlumdsNUg VFZU6iu7WiJd4W2tQ+GqZtSFGTxIg6dpDKRGqGraQDa3DhFdmDQ1t8TCQ+FtRzvu63iPXBT+0GSQ Kowakz5sitQa7aaGOq/LHZF6bjPeEfmp8edh87nuX/W8F9bItUKkuVazPmQyEG1bZbPPaXJFxJAM ko5gYMJbXKHOqIHHJnbQzcAi+rarm72FGN+eGN0TNfCYvh/Zxd5DTEtvw9JPsOhswjgQi0Dl2rVk sIfXgXFC714b7enTmAyGU2ImMRgJ2SORkMa3odI7ePPgG4OamsFdg6J3UBhMNPmjg4mu2OCZTZv6 dLWJ+mC09kYLaNdZn4b44j7Rd8ZtWOuzGxMEt0z6d7ahKbSyo0SPWp62nLXoLO7hyifFveCPNIvX J4yehp3e9fL6CB6ww3mhrym63nXFrrv5lgkeqLWwi30wgtOttPOHxl9tg2H9PPVl4+dL95RBYaVu afmZOvR4rWxTFP4foucOUFndBE+oDWAwhMEgBlsxoLN1iJt5LPPYx3Ydq2F2D64uPaKhBwc33Fjj ilfT4dBVPrOnsXrkTqYWEoABNI7bMBjCYLDsCtIlU2cBl1pwuyawdPAB/5zcRK5l5ybU+zLs0myU HRTmh+Rr1RPEtIBma2e2b27Iu07O/XA0nU/e8ctr7ovXNEoR6Dv+TnP41qvv2umPxb787p4943/5 w2239Nl85nUbLfIG/0bxb73etVYgwFKzZo3/s1dlRw56PdXm+MjgSLy1s6W13VnX4nZL7pHhg9nh ifo1ZnjVOVAXCmFf/Bz0xSe0Pybt5BvHXVW+U8LxhNfvJL6A39+g079d4bMaZ1yCy2UPtrYKM6az JtFEWQsa7w75m9lEOdDgcRA7riGP2q+3z9gftT9tP2t/026wQCImzNsr7HiVRhBi6lUaGDbp2LnT 8l9t49buMB7FBFu3Ay8q0b2VV+mgaZFsolYDDoPQQERbRQPhF5Os/CZyaY8sELtkk6W0sxKtX5f5 u9vWN7T0yh2LC6mnnqJ2aoRapZv4noqyxeEbcPe1tTSEdz10o/AsvjyN707zlWoPcOp+zZ2klbyX aDQ0mm1xPS4CGcxuc9qQlt+RK1rNG81/GVgQXqr5TY1Ovb284o4hW28tjULLHfTG6oQz4aBnwSR+ /gvPhNWYygeon6kL/T5Pq0tXafCho27Qv+3xmYxVjY0+3OisITPCI8JZQYNHVNxtvidFPXGTenTN rdZ5SfBKguRat9w1P0c9GJQL3eh6C5yYPn5OUKAX99RrdmzHGnW5tOTDDnepz1o3+trPfuTBK5TP tXmZVxDv33x3mg8RF/rRwQ63tOzd3nWVQHl84Yv9mzoSwj9wfuPvrL8I/G4UHjhZYyE2UaLXkuxG c/QFIkgJfdnN8EsmOXqB+QK+xkaicgvZDhMUKg8Ux3dYeqlI6UeSSxtXQAXqt9FqsVtJo9XSyJ4l mIxJQIONSPVuN0yydAQk8C1JgqEKgJOJUcubFhHc+cqEQbYefRNYfZQ0YpbEPptgw3SLPEoELbGT 
UTJPHiUVgPfBE003g2F1oTjcF8bddWBax8+7XRTEmRI/nyx1H61iK7wQ1/H13Tb1CudjAm6Q08VI MwBCAifGidogTwLAUgcDv4CBFY/3qnOGkm0j1GF4jMxDKr3iDYUI3+4m5XNsYdxv47d8eFdrEuCv 7NYI2DZx6he7azvjI4ILJd+4+PCZxYcb6ME+2zDMyfa8Jzxn9vYwH+FCv/gU8xe6PTVEuPhH0IGf gA4ENM8kDt4j3eMQ9WKNRud3iV6N03+f7Qv2l8WXpJ85zvh/K/5Ges1xzm95QLhXvNd2v/1+/70B nfS09LRjgbwgveB4g5yVzjoukjel3zuMZL5nX5TgeEnmPTIEa2Jk3h2zJ9wxG3zwJOmJbfuiEo8d 2Pge+kxj+uxiz4lbAMBjCvPSPK+q0kssksWxj4xKo45jBOnWrxP9/m6xyz8sDvk/at3j/KTtTvvz wvfF70nftf3Q/pzjX/1PB94TLlrtekEn6v26gEvwiFa/M9ArrA+MCFsDVwuzgvkFYcH2gn0BzWGj A0iFBgQSa2LogT6+Jub0uHqiINNfn4DYD/HjEIuYSI1RncD6i8VqBRV/aMn1ZU5zU3Oz/ynxSytn XBKfcuF24peWCrFjOjjgcVTQW5r9fvRQDA6n3QEfEgicEl9O6B12eLRrRJG+lKx26Cc2+o0cLyda sCdJtmZ/iz3g0DiJRlxrkwSNlUjEoQnYic1iE202UW8/JdyQsHg8DQ0Gg14Hg5MoCgbnE+IrxCq+ kvAloBvN0I50lrxJKnECjf3qHkjQEdKy9senhU/DwOOit+bHX32V1C0d9aczN5rep37nB13totHR UNvRj38X91Gwt0FUelK7nNpr6AzcWWeujdvqqq1x+6mL34f5lSdu64LATjcPgvCuEQK68malHcsc 92MflOriYh0E5Wvzh8ZJ/tAyp4OwBTMH2w2R+K4IVUYYiCQe02egQOKxg/dciccOfiRU4jFV9iDH F6xlz+AsSTx28PsIEo/pM9Aq8Rifjxu78fGbxkudO+4swSwmL+BvApS2BcAy2Gzrbar94F/noAFz 8ZlnvrGxOz7y1Ei8uX7XzptPzo9eWReJjzwzEt/Q9dV/Fj62eLv4lKa7Da1ESK5bfFLYsXhCGOQb 763d2gv9MHp8EyxHECyHU5hINE8a3jWIDkO9U3zRcM4gbjEOOp8j3zNqXya/NYqS02yqxu9m+GWi 0VgdNeK3OKwlAUOTcYxc7VSMirOSrBzFqz0jhK2V40IbT33MM2J0siGihn3VjJEYnHqT0eM8Jb5z Eo/xCQS8+HcSeuKB0avyRofzlOBLWGqMFuMuY854t/Ee41ljJeD7R3DOfAlAYbsR/Az9qF6zAKOX q/bmu5duZMKA8Nb4hT78ppLz56gun7sAg0O3oOptHRu4S8pZY48b50HkhIqWxXSHy4iOLlFPxxrn IXDy6yeUMdUAGGjxRvXQsCpaPKWYH/f7utQv6xB8bBIZ2+BzCH8QpjTd60Ldf7xVXLP4HD2lrRn9 UTeI9+TXRId9OB4f/tVXcajfp/0YwXvM+G+Rhhr6Q2FOwcJhkVQJLRzWkAmaC2EtkZfyVJA6YT+H dSQgqDgryZeFBzlcRQLiTzmsJ2uq/ofDBtGjd3LYSDLGKIdNZNKolq3WfUsc4rCZfNSkY34K/LvZ tJ/DAjGafsthkWhN73BYQzpNv+KwlliW8lQQU7WNwzpiq/ZyuJLsqe7kcBWk/4DDemK2LHDYINRY znPYSLqsf+SwiayX1LLVmmurb+OwmYQkBSgRtBqgzSx9ncNaEpD+msIVkG6QfsJhLWmUTlNYB+k6 6b85rCUe6ecUrkS52KwcBllI71G4CtJNtg4Oa0mzzUVhPZcvg5l8Gczky2AmXwYz+TKYyZfBTL4M ZvJlMJMvg5l8Gczky2AmXwYz+TKYyZfBTL4IG5BXtr0cBl7ZNlHYiL+FY7uJw1qyzpaisIny5CEO A/22uyhsxh/0tT3PYS1psT1OYQvF8xCHEQ/Lb0Oe297lMPDctkBhO9Jjd3AY6blAYQek2+29HNaS dnsThZ00/wEOQ377Lgq7aP67OIz5ZymM31NlsJ/mMOiA/e8p3EDpsXIYdYDJ2kvz/5zDmP9ZCjej DtgvcBh0wP4fFF6H/HH4OAz8cegpHEQ8jn4OAx5HK8JVZfyvKuN/VVm7qsraZSrLbyrLbyqTi0mV yxg5Ap6DQiZJkqQglsnD8BkjUxTeQXIkC58izyWTAfoLTDM0TEJ6muaQISUD5UMAbaXpyf8npvAS ZTLZA28yZHYpTwHShiFm9XWQbviLkCCHOmlqP5TIQLwbyhwAGoq01G7AV4BPnhyGcGIFVT2Uqll4 n6a5ZLIT4jmIt1AaJiDHYfqmsEQv1rcRQpm0AL40UJWHNwX4TALeVnL1KvmX18TqGYW2Bpfq2gE8 WJbL10g5ivyagOdpiPPkIKRhXf93XsuQir+tlQYqi5Qa5I1Mf99JplqRpxJVn5GiLKQwqgrQnp3g /4+BTIbgMwC8R3gXpMoQDkF4JU0fhJQ9EKJ08Be9BuFvB00dI9XEQD/YhjSVVXGFZqrprJUzlM8z nLojS1xY2XqmSTloIbZ+Bspj7iTkYq1kujFLNUMm++nbI7SVap3Y5sNlnJmlZZmGqPQwzk3T/IwS 7AMZqhEK1VqFph2gWFB6CuUiaus1vLYpeH+Y5ssBHSrPWZ3F9+GMqm1zVCMwRaHtmuI0TsATpqcg LUPbN0m5N31ZfuV4u5BjShmWOY7zcvVNcO1BndhP+yqjej+XTJZjvpyE1tJWLecU06uVWrGyZpaO vD4MIdqJJNSa4dwuUGzFVetG7u+FlAytsVAm+ZIsmJyW9wvkDqu1QPGkIHWStuDDyFzmupilfTEL T6V6sW9PUE6zXpqkdixfZsfal3Lny/SWta/4gZxC6qYpflWvcsvwzVH5H6TSLLcVk1wvSjlzkJdZ kVnKccQ/tdQeRle5dqO9Qm1g/Ge9aobrh6qll+rQ+7WopB/DtO0rJYccRvyHIF2huNXWpGjMbFv2 EhnkL+F3CTO2D6EM5RzScJjawLkyO/BhpK/iY30S++phLo1SH1PxrZQj4xZrQZHagOJl+7EqseQl vJ78k6gtcXllDSnKYezlygqKWHtQg3qWMOwF+98PqUGCo+VGEiUbYISUIeyApyCM4lH4RAh6rnvJ dp4zAm874E2UwxvIevhgqS4SgxEfP4gdpVUEynrAewgDv/AvBO24tMenqOW7guov8hTp3E6tRJHa gTz4MAodsQ8sWd/kkpVR8cxRHSly21iyxSrXh2Ec30H76qU+xRzHplpOtARznI8ooX6alua8HQKY +T4HluoqrwH9I4XSneJ9J0W1Rikbn2WKVaU9TeWWoZjS5EbewhnamhTVvYmy9rfTnqvyULXkzBeY o7rL+klpRC0AbrTBJSomScmbUPvdDB/70P4Wltki1L1ZikO1AJfjeI5yZYaGJZ7kKeYc9QmYpSxS 
Whi+cvtWordIeTdF7YDKmQnIlYJSai8oWcLQn6hnYZp/GrCGISxSi45Yw9RX2Mf9KVU7srSdoaUy f9665qimsLzKn6UW9V34EkuyhHvsyIwymUwp8sPy2JQi78hlc0VIkgdy+ZlcPllM57LyTCYVkrcm i8kPyBRGZPKeXGYWUwrycBbKdXR3R4IQdIbk/kxG3p0+MFUsyLuVgpI/rEyoqHoGcrP5tJKXdypz 8pZcZqLnsJIvIN7O0MZOuWVHOpXPFXKTxdary9J5ISgzuieIpXaM8aSvyWP55IQyncwflHOT70u1 nFcOpAtFJa9MyOmsnFLyxSTGudlsEVAVQjt3jQ0PDQ/0jw3v2invGpKvHB4Y3LlnUO7ftntwcMfg zrFqQ7VhbCpdkIsqMxGGKmfyuRlAdwRJWKoemJQ7kE/OTB2Rk1moErgxW1Dk/UfkI7lZLJnKHabE zGYngCGIB4ibLiCSpJxJp5QsZE8eyCvKtJIthuRroNhU8rAi5/Yj5VCyuIwYZNtcMq/IShqQ5eWJ dF5JFTNH5Ml8brpEVw7qyh1QaJY5yFkqNwHsyaf3zxYBNZCZyyrlDVpbUIkCXi2xYqkwwEn5cDIz m9yfAbILBaVYXjok781mlEKBNp62AtrEZVHMQdHCjJJKT6ZTK1suAxezxXT2AC2bnJhIo0iTGTlP dawdk/OUt1Bf8VKiMunpNDYIKqH55nL5g4Ui04pJ4AVNzM2Biszuz6QLU1gP4GLsnk4ekYF+ENXM EWRciUPLK6L8GJ4sNS6ZPSIfmlUKtJpULgvaluUtyHO6aebCVG42MwGqeTgNHQJ1YGXzMR9IUklD P2ISw3xLbQSyoIJiMlUsyRgbluRUT14eLSV5qUAqmZX3KyoiqCdZ7MEMe/f0y0G5ZWN0Q6u8oWNj MBKNRPT6vdshMdLREY1CuGH9BnlDV6w71l1tmCoWZ3rC4bm5udC0KvhUbvqKHFA6IW9XisWMkt+q FNIHUH2TqDKYZy4PIsrLVIuR9OEtO9pl1VLMQTZUznwShARq2T+RTwO1Q3mwPgewFCsg71EyoO55 0CAwOdifZbkfsadToCqT6Ruhwpl0MTUlT9D622VKISo5WIE5BWVCO2ohk9xPUUxSM4Gym4HeJ+8t MC1SpmczSVSAEuG52eLMbJFSklfA5qBSFpP7IR/TN4q3qKSmspSYiVxqFkVAlTC0Cs/CU8XpTHi6 mE1OK+Hpwr4UY0dWmQvhmw9Zak7JQKrywUXwKcyVhOZedQkGY+ac4nC8Wq4imRWqYRB6fdUck9TZ WO3tEHcEV3mv+ZTmO5rvap6C8JsfSGn6fSm9ElLYVCVHc86umnMbdRHVqSw6SatT/zoMxAfJO4D1 dXizWr6rKabV3l5BnaDDlFOr5xrlSyo4+WOu0ZEPxZFVqdd6tZu1vdoBbZd2ozah3aTdru1eFePY B8p5O7ZC6IA8q+dg7uLB1WkSrOTXmiZwZFaXYo5OXJOEkP8FUtXxgQplbmRzdHJlYW0KZW5kb2Jq CjUyIDAgb2JqCjw8L0ZpbHRlciAvRmxhdGVEZWNvZGUKL0xlbmd0aCAzMjcKPj4gc3RyZWFtCnic XZLLboMwEEX3/gov00UE5pVGQkjkUYlFHyrtBxB7SJGKsYyz4O9rz9BUqiWQzsydy4UhOjanRg+O R292ki043g9aWZinm5XAL3AdNBMJV4N0K+Fdjp1hkR9ul9nB2Oh+YmXJefTuu7OzC9/UarrAA4te rQI76CvffB5bz+3NmG8YQTses6riCnrv9NyZl24EHuHYtlG+P7hl62f+FB+LAZ4gC0ojJwWz6STY Tl+BlbE/FS+f/KkYaPWvv6epSy+/Oovq1KvjuI4rpEekJEcS50Dp7oyUxNhLT0QFUpYSnYkOSCl5 5uSZkmdOnvkJaVdjujVH8Zvq/hJCoEzsKQ8lEAcqkoX3DcUso0eTshBUJGV2pOKqpJA5pSsoq4+F xZpiJWssChK+X9jzfTnyZq3fC/4MuJCwikHD/X8xkwlT4foBCHinWQplbmRzdHJlYW0KZW5kb2Jq CnhyZWYKMCA3OAowMDAwMDAwMDAwIDY1NTM1IGYgCjAwMDAwNzYwMzUgMDAwMDAgbiAKMDAwMDA3 NTcwNCAwMDAwMCBuIAowMDAwMDcxNzk5IDAwMDAwIG4gCjAwMDAyNDU0MzAgMDAwMDAgbiAKMDAw MDA3NjEwNiAwMDAwMCBuIAowMDAwMDcyMDA3IDAwMDAwIG4gCjAwMDAwODY3NzYgMDAwMDAgbiAK MDAwMDA3MjIwNCAwMDAwMCBuIAowMDAwMDk2ODg5IDAwMDAwIG4gCjAwMDAwNzI0MDEgMDAwMDAg biAKMDAwMDEwNjk4OSAwMDAwMCBuIAowMDAwMDcyNjAwIDAwMDAwIG4gCjAwMDAxMTY4MjggMDAw MDAgbiAKMDAwMDA3MjgxMCAwMDAwMCBuIAowMDAwMTI3MTg5IDAwMDAwIG4gCjAwMDAwNzMwMDkg MDAwMDAgbiAKMDAwMDEzNzUxMiAwMDAwMCBuIAowMDAwMDczMjA4IDAwMDAwIG4gCjAwMDAwNzU4 MjEgMDAwMDAgbiAKMDAwMDE0Nzg5OCAwMDAwMCBuIAowMDAwMDczNDA4IDAwMDAwIG4gCjAwMDAx NTgyMzYgMDAwMDAgbiAKMDAwMDA3MzYxOSAwMDAwMCBuIAowMDAwMTY5ODA1IDAwMDAwIG4gCjAw MDAwNzM4MTkgMDAwMDAgbiAKMDAwMDE3OTkzOSAwMDAwMCBuIAowMDAwMDc0MDE5IDAwMDAwIG4g CjAwMDAxOTAwODYgMDAwMDAgbiAKMDAwMDA3NDIxOSAwMDAwMCBuIAowMDAwMTk5OTA2IDAwMDAw IG4gCjAwMDAwNzQ0MzAgMDAwMDAgbiAKMDAwMDIxMDMyNSAwMDAwMCBuIAowMDAwMDc0NjMwIDAw MDAwIG4gCjAwMDAyMjA3NTIgMDAwMDAgbiAKMDAwMDA3NDgzMCAwMDAwMCBuIAowMDAwMDc1OTQy IDAwMDAwIG4gCjAwMDAyMzEyODMgMDAwMDAgbiAKMDAwMDA3NTAzMCAwMDAwMCBuIAowMDAwMjQ1 NTcwIDAwMDAwIG4gCjAwMDAyNTIxNDMgMDAwMDAgbiAKMDAwMDI0MTcwNSAwMDAwMCBuIAowMDAw MDc1MjY3IDAwMDAwIG4gCjAwMDAyNTkwMzYgMDAwMDAgbiAKMDAwMDI2NDY4MyAwMDAwMCBuIAow MDAwMjQyMjk3IDAwMDAwIG4gCjAwMDAwNzU0ODIgMDAwMDAgbiAKMDAwMDI3MDg0NyAwMDAwMCBu IAowMDAwMjQyNTM3IDAwMDAwIG4gCjAwMDAyNzA5OTMgMDAwMDAgbiAKMDAwMDMwMjE5MiAwMDAw MCBuIAowMDAwMzAyNjA3IDAwMDAwIG4gCjAwMDAzMjg0MzUgMDAwMDAgbiAKMDAwMDI3MTIxMiAw 
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 12:42:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 49A78477; Thu, 27 Mar 2014 12:42:33 +0000 (UTC) Received: from mail-qg0-x22b.google.com (mail-qg0-x22b.google.com [IPv6:2607:f8b0:400d:c04::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C8A9CBEC; Thu, 27 Mar 2014 12:42:32 +0000 (UTC) Received: by mail-qg0-f43.google.com with SMTP id f51so2699579qge.30 for ; Thu, 27 Mar 2014 05:42:32 -0700 (PDT) MIME-Version: 1.0 X-Received: by 10.140.80.209 with SMTP id c75mr1538712qgd.79.1395924151988; Thu, 27 Mar 2014 05:42:31 -0700 (PDT) Received: by 10.96.79.97 with HTTP; Thu, 27 Mar 2014 05:42:31 -0700 (PDT) In-Reply-To: References: <20140326023334.GB2973@michelle.cdnetworks.com> <1903781266.1237680.1395880068597.JavaMail.root@uoguelph.ca> Date: Thu, 27 Mar 2014 09:42:31 -0300 Message-ID: Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem From: Christopher Forgeron To: araujo@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems , Alexander Motin , FreeBSD Net X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 12:42:33 -0000

I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE or earlier, since a 9.2-STABLE build I have from last year doesn't exhibit the problem. New code in if.c at line 660 looks to be what is starting this, which makes me wonder how TSO was being handled before 9.2.

I also like Rick's NFS patch for cluster size. I notice an improvement, but I don't have solid numbers yet; I'm still stress-testing it as we speak.
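For anyone who wants to sanity-check the 32-vs-35 segment numbers in this thread, the arithmetic is easy to reproduce. Below is a minimal standalone sketch; the 2048-byte (MCLBYTES) and 4096-byte (page-size) cluster sizes and the 65535-byte TSO ceiling are the stock values, but the "+1 header mbuf" accounting is only an illustration, not the driver's exact logic:

/*
 * Rough segment count for a maximal TSO burst: how many mbuf
 * clusters a 65535-byte payload spans, for 2K vs. page-size
 * clusters, plus one mbuf for the TCP/IP headers.
 */
#include <stdio.h>

#define TSO_MAX 65535			/* IP_MAXPACKET: TSO payload ceiling */

static int
clusters(int payload, int clsize)
{
	return ((payload + clsize - 1) / clsize);	/* ceiling division */
}

int
main(void)
{
	printf("2K clusters: %d data + 1 header = %d segments\n",
	    clusters(TSO_MAX, 2048), clusters(TSO_MAX, 2048) + 1);
	printf("4K clusters: %d data + 1 header = %d segments\n",
	    clusters(TSO_MAX, 4096), clusters(TSO_MAX, 4096) + 1);
	return (0);
}

That prints 33 and 17: a full 64K burst built from 2K clusters already needs 32 data segments before the header mbuf is even counted, which is exactly the 82599's limit, so 35 leaves a little slack, and page-size clusters roughly halve the segment count.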
On Wed, Mar 26, 2014 at 11:44 PM, Marcelo Araujo wrote: > Hello All, > > > 2014-03-27 8:27 GMT+08:00 Rick Macklem : > > > > Well, bumping it from 32->35 is all it would take for NFS (can't comment > > w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32 for the 82599 > > (just so others aren't confused by the above comment). I understand > > your point was w.r.t. using 100 without blowing the kernel stack, but > > since the testers have been using "ix" with the 82599 chip, which is > > limited to 32 transmit segments... > > > > However, please increase any you know can be safely done from 32->35, > rick > > > > > I have plenty of machines using the Intel X540, which is based on the 82599 chipset. > I have applied Rick's patch on ixgbe to check whether the packet size is bigger > than 65535 or the cluster count is bigger than 32. So far, on FreeBSD > 9.1-RELEASE this problem does not happen. > > Unfortunately all my environment here is based on 9.1-RELEASE, with some > merges from 10-RELEASE such as NFS and IXGBE. > > Also I have applied the patch that Rick sent in another email with the > subject 'NFS patch to use pagesize mbuf clusters', and we can see some > performance boost over 10Gbps Intel. However, here at the company we are > still doing benchmarks; if someone wants my benchmark results, I can > send them later. > > I'm wondering if this update on ixgbe from 32->35 could be applied also > for versions < 9.2. I'm thinking that this problem arises only on 9-STABLE > and consequently on 9.2-RELEASE, and, fortunately or not, 9.1-RELEASE doesn't > share it. > > Best Regards, > -- > Marcelo Araujo > araujo@FreeBSD.org > _______________________________________________ > freebsd-net@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" >

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 14:33:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8886E953 for ; Thu, 27 Mar 2014 14:33:27 +0000 (UTC) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 43B3E8FC for ; Thu, 27 Mar 2014 14:33:26 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id s2REQKpk023082; Thu, 27 Mar 2014 09:26:20 -0500 (CDT) Date: Thu, 27 Mar 2014 09:26:20 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Joar Jegleim Subject: Re: zfs l2arc warmup In-Reply-To: Message-ID: References: User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Thu, 27 Mar 2014 09:26:20 -0500 (CDT) Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 14:33:27 -0000

On Thu, 27 Mar 2014, Joar Jegleim wrote: > Is this how 'you' do it to warm up the l2arc, or am I missing something?
> The thing with this particular pool is that it serves somewhere > between 20 -> 30 million jpegs for a website. The front page of the > site will for every reload present a mosaic of about 36 jpegs, and the > jpegs are completely randomly fetched from the pool. > I don't know what jpegs will be fetched at any given time, so I'm > installing about 2TB of l2arc (the pool is about 1.6TB today) and I > want the whole pool to be available from the l2arc.

Your usage pattern is the opposite of what the ARC is supposed to do. The ARC is supposed to keep most-often accessed data in memory (or retired to L2ARC) based on access patterns.

It does not seem necessary for your mosaic to be truly random across 20 -> 30 million jpegs. Random across 1000 jpegs which are circulated in time would produce a similar effect.

The application building your web page mosaic can manage which files will be included in the mosaic and achieve the same effect as a huge cache by always building the mosaic from a known subset of files. The 1000 jpegs used for the mosaics can be cycled over time from a random selection, with old ones being removed. This approach assures that in-memory caching is effective since the same files will be requested many times by many clients.

Changing the problem from an OS-oriented one to an application-oriented one (better algorithm) gives you more control and better efficiency.

Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 14:53:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2F5D11F9 for ; Thu, 27 Mar 2014 14:53:57 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id E616AAEC for ; Thu, 27 Mar 2014 14:53:56 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s2RErki3070901 for ; Thu, 27 Mar 2014 09:53:47 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Thu Mar 27 09:53:47 2014 Message-ID: <53343B75.6090807@denninger.net> Date: Thu, 27 Mar 2014 09:53:41 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: zfs l2arc warmup References: In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070103020008010308040207" X-Antivirus: avast! (VPS 140326-2, 03/26/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 14:53:57 -0000
On 3/27/2014 9:26 AM, Bob Friesenhahn wrote: > On Thu, 27 Mar 2014, Joar Jegleim wrote: >> Is this how 'you' do it to warm up the l2arc, or am I missing something? >> >> The thing with this particular pool is that it serves somewhere >> between 20 -> 30 million jpegs for a website. The front page of the >> site will for every reload present a mosaic of about 36 jpegs, and the >> jpegs are completely randomly fetched from the pool. >> I don't know what jpegs will be fetched at any given time, so I'm >> installing about 2TB of l2arc (the pool is about 1.6TB today) and I >> want the whole pool to be available from the l2arc. > > Your usage pattern is the opposite of what the ARC is supposed to do. > The ARC is supposed to keep most-often accessed data in memory (or > retired to L2ARC) based on access patterns. > > It does not seem necessary for your mosaic to be truly random across > 20 -> 30 million jpegs. Random across 1000 jpegs which are circulated > in time would produce a similar effect. > > The application building your web page mosaic can manage which files > will be included in the mosaic and achieve the same effect as a huge > cache by always building the mosaic from a known subset of files. The > 1000 jpegs used for the mosaics can be cycled over time from a random > selection, with old ones being removed. This approach assures that > in-memory caching is effective since the same files will be requested > many times by many clients. > > Changing the problem from an OS-oriented one to an > application-oriented one (better algorithm) gives you more control and > better efficiency. > > Bob

That's true, but the other option, if he really does want it to be random across the entire thing, given the size (which is not outrageous) and that the resource is going to be read-nearly-only, is to put them on SSDs and ignore the L2ARC entirely. These days that's not a terribly expensive answer, since in a read-mostly-always environment you're not going to run into a rewrite life-cycle problem on rationally-priced SSDs (e.g. Intel 3500s).

Now an ARC cache miss is not all *that* material since there is no seek or rotational latency penalty.

HOWEVER, with that said, it's still expensive compared against rotating rust for bulk storage, and as Bob noted a pre-select middleware process would result in no need for an L2ARC and allow the use of a pool with much smaller SSDs for the actual online retrieval function.

Whether the coding time and expense is a good trade against the lower hardware cost to do it the "raw" way is a fair question.
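For what it's worth, the pre-select pass Bob describes needn't be much code. Here is a toy C sketch of the rotating-window idea; every path, constant, and helper in it is invented purely for illustration, and nothing about it is FreeBSD- or ZFS-specific:

/*
 * Serve the mosaic from a small rotating window of the collection
 * instead of the whole 20-30M files, so the ARC gets repeat hits.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WINDOW 1000		/* size of the "hot" subset mosaics draw from */
#define MOSAIC 36		/* tiles per front-page mosaic */

/* Hypothetical catalog lookup: map an index to a file path. */
static void
path_for(unsigned long idx, char *buf, size_t len)
{
	snprintf(buf, len, "/pool/jpegs/%08lu.jpg", idx);
}

int
main(void)
{
	unsigned long total = 25000000UL;	/* pretend collection size */
	char path[128];
	int i;

	/*
	 * Derive the window's base from the current hour, scrambled by
	 * a multiplicative hash so consecutive hours land far apart.
	 * Every request in the same hour draws from the same 1000
	 * files, so they stay cached instead of walking the whole pool.
	 */
	unsigned long hour = (unsigned long)time(NULL) / 3600UL;
	unsigned long base = (hour * 2654435761UL) % (total - WINDOW);

	srandom((unsigned int)time(NULL));
	for (i = 0; i < MOSAIC; i++) {
		path_for(base + (unsigned long)random() % WINDOW,
		    path, sizeof(path));
		printf("%s\n", path);	/* hand these to the web tier */
	}
	return (0);
}

Visitors still see what looks like randomness across the whole collection, but any given hour's working set is 1000 files that fit comfortably in RAM, which is Bob's point about fixing the algorithm instead of buying 2TB of L2ARC.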
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 15:45:23 2014
Date: Thu, 27 Mar 2014 10:45:15 -0500
From: Linda Kateley
To: freebsd-fs@freebsd.org
Subject: Re: zfs l2arc warmup

It seems like this should be easier. The ARC and L2ARC will hold what
has been read. I don't know; maybe cat the jpegs at boot?

On 3/27/14, 9:53 AM, Karl Denninger wrote:
> On 3/27/2014 9:26 AM, Bob Friesenhahn wrote:
>> On Thu, 27 Mar 2014, Joar Jegleim wrote:
>>> [...]
>>
>> [...]
>
> That's true, but the other option, if he really does want it to be
> random across the entire thing, is to put them on SSDs and ignore the
> L2ARC entirely.
> [...]
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 20:20:13 2014
Date: Thu, 27 Mar 2014 21:20:11 +0100
From: Joar Jegleim
To: Johan Hendriks
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup

Thanks! I'd read the first link many times before, but the second one
was new to me and great reading!

On 27 March 2014 11:21, Johan Hendriks wrote:
> Joar Jegleim schreef:
>
>> Hi list!
>>
>> I'm struggling to get a clear understanding of how the l2arc gets
>> warm (zfs). It's a FreeBSD 9.2-RELEASE server.
>>
>> From various forums I've come up with this, which I have in my
>> /boot/loader.conf:
>>
>> # L2ARC tuning
>> # Maximum number of bytes written to l2arc per feed
>> # 8MB (actual = vfs.zfs.l2arc_write_max * (1000 / vfs.zfs.l2arc_feed_min_ms))
>> # so 8MB every 200ms = 40MB/s
>> vfs.zfs.l2arc_write_max=8388608
>> # Mostly only relevant in the first few hours after boot
>> # write_boost: speed to fill the l2arc until it is filled (after boot)
>> # 70MB, same rule applies, multiply by 5 = 350MB/s
>> vfs.zfs.l2arc_write_boost=73400320
>> # Not sure
>> vfs.zfs.l2arc_headroom=2
>> # l2arc feeding period
>> vfs.zfs.l2arc_feed_secs=1
>> # minimum l2arc feeding period
>> vfs.zfs.l2arc_feed_min_ms=200
>> # control whether streaming data is cached or not
>> vfs.zfs.l2arc_noprefetch=1
>> # control whether feed_min_ms is used or not
>> vfs.zfs.l2arc_feed_again=1
>> # no read and write at the same time
>> vfs.zfs.l2arc_norw=1
>>
>> But what I really wonder is: how does the l2arc get warmed up?
>> I'm thinking of 2 scenarios:
>>
>> a: when the arc is full, stuff that is evicted from the arc is put
>> over in the l2arc; that means files in the fs that are never
>> accessed will never end up in the l2arc, right?
>>
>> b: zfs runs through the fs in the background and fills up the l2arc
>> for every file, regardless of whether it has been accessed or not
>> (this is the 'feature' I'd like)
>>
>> I suspect scenario a is what really happens, and if so, how do
>> people warm up the l2arc manually (?)
>> I figured that if I rsync everything from the pool I want to be
>> cached, it will fill up the l2arc for me, which I'm doing right now.
>> But it takes 3-4 days to rsync the whole pool.
>>
>> Is this how 'you' do it to warm up the l2arc, or am I missing
>> something?
>>
>> [...]
>>
>> Any input on my 'rsync solution' to warm up the l2arc is much
>> appreciated :)
>
> A nice blog about the L2ARC:
>
> https://blogs.oracle.com/brendan/entry/test
>
> https://blogs.oracle.com/brendan/entry/l2arc_screenshots
>
> regards
> Johan

--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------
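If one does want to force-warm the cache, a sketch of a read pass that
is lighter than a full rsync copy. Note that with
vfs.zfs.l2arc_noprefetch=1 (as in the loader.conf above), sequentially
prefetched data is skipped by the L2ARC feed, so the sketch assumes the
tunable is writable at runtime and flips it off for the duration; the
/pool path is illustrative:

# Let prefetched (streaming) reads become L2ARC-eligible during warmup.
sysctl vfs.zfs.l2arc_noprefetch=0

# Read every file once and discard the data; the blocks pass through
# the ARC, where the l2arc feed thread can copy them to the cache devices.
find /pool -type f -exec cat {} + > /dev/null

# Restore the setting once the cache devices are populated.
sysctl vfs.zfs.l2arc_noprefetch=1

Even then the feed rate is capped by l2arc_write_max/write_boost, so a
1.6TB pool at the 40MB/s configured above still needs on the order of
half a day to copy out.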
"freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Mar 2014 20:34:13 -0000 > Don't you need RAM for the L2ARC, too? > > http://www.richardelling.com/Home/scripts-and-programs-1/l2arc > > > I'd just max-out the RAM on the DL370 - you'd need to do that anyway, > according to the above spread-sheet.... > yeah, it does. At the moment I've got 2x480GB ssd for l2arc and 144GB ram, though I haven't found a way to calculate if I have enough ram or not I've seen posts that make me suspect I had enough ram for this setup. The link from Johan Hendriks https://blogs.oracle.com/brendan/entry/l2arc_screenshots mention that in the bottom actually " It costs some DRAM to reference the L2ARC, at a rate proportional to record size. For example, it currently takes about 15 Gbytes of DRAM to reference 600 Gbytes of L2ARC - at an 8 Kbyte ZFS record size. If you use a 16 Kbyte record size, that cost would be halve - 7.5 Gbytes. This means you shouldn't, for example, configure a system with only 8 Gbytes of DRAM, 600 Gbytes of L2ARC, and an 8 Kbyte record size - if you did, the L2ARC would never fully populate. " My two 480GB ssd's will probably be full by tomorrow, they're currently at 686GB and got about 207GB left to fill. I wonder how I can read out how much ram is used for l2arc reference (?) would that be the 'HEADER' value from top in 9.2-RELEASE (the ARC line) it was around 3GB yesterday and now I see it's climbed to about 6.3GB . (got 128KB record size) . On 27 March 2014 11:40, Rainer Duffner wrote: > Am Thu, 27 Mar 2014 08:50:06 +0100 > schrieb Joar Jegleim : > >> Hi list ! >> >> I struggling to get a clear understanding of how the l2arc get warm >> ( zfs). It's a FreeBSD 9.2-RELEASE server. >> > >> The thing is with this particular pool is that it serves somewhere >> between 20 -> 30 million jpegs for a website. The front page of the >> site will for every reload present a mosaic of about 36 jpegs, and the >> jpegs are completely randomly fetched from the pool. >> I don't know what jpegs will be fetched at any given time, so I'm >> installing about 2TB of l2arc ( the pool is about 1.6TB today) and I >> want the whole pool to be available from the l2arc . >> >> >> Any input on my 'rsync solution' to warmup the l2arc is much >> appreciated :) >> >> > > > > Don't you need RAM for the L2ARC, too? > > http://www.richardelling.com/Home/scripts-and-programs-1/l2arc > > > I'd just max-out the RAM on the DL370 - you'd need to do that anyway, > according to the above spread-sheet.... 
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 20:40:15 2014
Date: Thu, 27 Mar 2014 21:40:11 +0100
From: Joar Jegleim
To: Bob Friesenhahn
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup

Appreciate your input; I've talked to our devs about that and asked
them to make some finite subset of the jpegs that rotates every night.
Actually the whole web application is being rewritten, so I won't have
anything like that until August at best, which certainly isn't bad.
When I get that kind of feature, maybe I'll no longer need an l2arc
covering the whole dataset.

On 27 March 2014 15:26, Bob Friesenhahn wrote:
> On Thu, 27 Mar 2014, Joar Jegleim wrote:
>> [...]
>
> Your usage pattern is the opposite of what the ARC is supposed to do.
> [...]
--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 20:55:03 2014
Date: Thu, 27 Mar 2014 21:55:01 +0100
From: Joar Jegleim
To: Karl Denninger
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup

I agree, and since the devs will take this into account for our next,
total-rewrite release, I may hold off on further hardware purchases
until then; not sure yet.

I've almost got my current 960GB of l2arc filled up, and I'm going to
see how that affects performance.
It won't cover the whole dataset, but about 75% or so, so I reckon I
should see some improvement.

On 27 March 2014 15:53, Karl Denninger wrote:
> On 3/27/2014 9:26 AM, Bob Friesenhahn wrote:
>> On Thu, 27 Mar 2014, Joar Jegleim wrote:
>>> [...]
>>
>> [...]
>
> That's true, but the other option if he really does want it to be
> random across the entire thing, given the size (which is not
> outrageous) and that the resource is going to be read-nearly-only, is
> to put them on SSDs and ignore the L2ARC entirely.
> [...]
--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 21:00:12 2014
Date: Thu, 27 Mar 2014 22:00:08 +0100
From: Joar Jegleim
To: Rainer Duffner
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup

On 27 March 2014 11:40, Rainer Duffner wrote:
> Am Thu, 27 Mar 2014 08:50:06 +0100
> schrieb Joar Jegleim:
>> [...]
>
> [...]
--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 21:02:11 2014
Date: Thu, 27 Mar 2014 22:02:09 +0100
From: Joar Jegleim
To: Rainer Duffner
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup

Sorry for the previous empty mail. But I think maybe
kstat.zfs.misc.arcstats.l2_hdr_size would show the l2arc header size.
It's currently at

kstat.zfs.misc.arcstats.l2_hdr_size: 4901413968

while the l2arc is at 696GB, and I have the default 128KB recordsize.

On 27 March 2014 11:40, Rainer Duffner wrote:
> Am Thu, 27 Mar 2014 08:50:06 +0100
> schrieb Joar Jegleim:
>> [...]
>
> Don't you need RAM for the L2ARC, too?
> [...]

--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------
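For anyone wanting the same numbers on their own box, the ARC
statistics are all exported under one sysctl tree (FreeBSD's kstat
compatibility names), so a quick look is:

# All L2ARC accounting, header bookkeeping included.
sysctl kstat.zfs.misc.arcstats | grep l2

# Just the header cost, in bytes.
sysctl -n kstat.zfs.misc.arcstats.l2_hdr_size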
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 21:37:44 2014
Date: Thu, 27 Mar 2014 17:37:36 -0400 (EDT)
From: Rick Macklem
To: Christopher Forgeron
Cc: FreeBSD Filesystems, Alexander Motin, FreeBSD Net
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem

Christopher Forgeron wrote:
> I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE or
> earlier, as a 9.2-STABLE from last year I have doesn't exhibit the
> problem. New code in if.c at line 660 looks to be what is starting
> this, which makes me wonder how TSO was being handled before 9.2.
>
> I also like Rick's NFS patch for cluster size. I notice an
> improvement, but don't have solid numbers yet. I'm still stress
> testing it as we speak.
>
Unfortunately, this causes problems for small i386 systems, so I am
reluctant to commit it to head. Maybe a variant that is only enabled
for amd64 systems with lots of memory would be ok?

> On Wed, Mar 26, 2014 at 11:44 PM, Marcelo Araujo wrote:
>
>> Hello All,
>>
>> 2014-03-27 8:27 GMT+08:00 Rick Macklem:
>>
>>> Well, bumping it from 32->35 is all it would take for NFS (can't
>>> comment w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32
>>> for the 82599 (just so others aren't confused by the above
>>> comment). I understand your point was w.r.t. using 100 without
>>> blowing the kernel stack, but since the testers have been using
>>> "ix" with the 82599 chip, which is limited to 32 transmit
>>> segments...
>>>
>>> However, please increase any you know can be safely done from
>>> 32->35, rick
>>
>> I have plenty of machines using Intel X540, which is based on the
>> 82599 chipset. I have applied Rick's patch on ixgbe to check
>> whether the packet size is bigger than 65535 or the cluster count
>> is bigger than 32. So far, on FreeBSD 9.1-RELEASE, this problem
>> does not happen.
>>
>> Unfortunately all my environment here is based on 9.1-RELEASE, with
>> some merges from 10-RELEASE such as NFS and IXGBE.
>>
I can't see why it couldn't happen on 9.1 or earlier, since it just
uses IP_MAXPACKET in tcp_output(). However, to make it happen NFS has
to do a read reply (server) or write request (client) that is a little
under 64K bytes. Normally the default will be a full 64K bytes, so for
the server side it would take a read of a file where the EOF is just
shy of the 64K boundary. For the client write it would be a write of a
partially dirtied block where most of the block has been dirtied.
(Software builds generate a lot of partially dirtied blocks, but I
don't know what else would.) For sequential writing it would be a file
that ends just shy of a 64K boundary (similar to the server side)
being written.

I think it is more likely your NFS file activity, and not 9.1 vs 9.2,
that avoids the problem. (I suspect there are quite a few folk running
NFS 9.2 or later on these ix chips who don't see the problem as well.)
Fortunately you (Christopher) were able to reproduce it, so the
problem could be isolated.

Thanks everyone for your help with this, rick

>> Also I have applied the patch that Rick sent in another email with
>> the subject 'NFS patch to use pagesize mbuf clusters', and we can
>> see some performance boost over 10Gbps Intel. However, here at the
>> company we are still doing benchmarks. If someone wants my
>> benchmark results, I can send them later.
>>
>> I'm wondering if this update on ixgbe from 32->35 could be applied
>> also for versions < 9.2. I'm thinking that this problem arises only
>> on 9-STABLE and consequently on 9.2-RELEASE, and fortunately or
>> not, 9.1-RELEASE doesn't share it.
>>
>> Best Regards,
>> --
>> Marcelo Araujo
>> araujo@FreeBSD.org
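A hypothetical way to construct the trigger condition Rick describes,
on the server side: a file whose EOF sits just under a 64K multiple, so
the final (or only) read reply is a little under 64K. The paths are
illustrative, and whether this alone trips the bug still depends on the
rsize in use and on the driver:

# On the NFS server: a 65500-byte file, just shy of the 64K boundary.
truncate -s 65500 /export/testfile

# On the client: read it back over the wire and watch the interface
# error counters / console for the TSO transmit failure.
dd if=/mnt/testfile of=/dev/null bs=64k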
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 22:00:01 2014
Date: Thu, 27 Mar 2014 22:00:01 GMT
To: freebsd-fs@FreeBSD.org
Reply-To: dteske@FreeBSD.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix

The following reply was made to PR kern/187594; it has been noted by
GNATS.

Date: Thu, 27 Mar 2014 14:55:58 -0700

Hi,

I can't seem to find the code where you mention in your previous post:
`...and slightly tweak the default reservation to be equal to the VM
system's "wakeup" level.'

Comparing Mar 26th's patch to Mar 24th's patch yields no such change.
Did you post the latest patch?
--
Devin
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 22:05:35 2014
Date: Thu, 27 Mar 2014 18:05:31 -0400 (EDT)
From: Rick Macklem
To: araujo@FreeBSD.org
Cc: FreeBSD Filesystems, Alexander Motin
Subject: Re: review/test: NFS patch to use pagesize mbuf clusters

Marcelo Araujo wrote:
>
> Hello Rick,
>
> We made a few tests here, and we could see a little improvement for
> READ!

Cool. Just eyeballing the graphs, it looks like about 10-20%
improvement.

Btw, "rsize=262144" will be ignored and it will use a maximum of
MAXBSIZE (65536). (I don't think it's in 9.1, but on newer systems you
can do "nfsstat -m" to see what is actually being used.)

A couple of things you might try:
- You didn't mention this, so I don't know, but you probably want more
  than the default # of nfsd threads on the server. You can set that
  with nfs_server_flags="-u -t -n 64" (to set it to 64) in
  /etc/rc.conf. (Double check in /etc/defaults/rc.conf to make sure I
  got the name of it correct.)
- You might want to try increasing readahead with the "readahead=8"
  mount option. (It defaults to only 1, but can be increased to 16.
  It's kinda fun to try values and see what works best.)

> We are still double checking it. All our systems have 10G Intel
> interfaces with TSO enabled and we have those 32 transmit segments
> as a limitation. We ran the test several times, and we didn't see
> any regression.
>
The regression is threads stuck looping in the kernel, so it will be
pretty obvious when it happens (due to exhaustion of kernel memory
causing it to not be able to allocate "boundary tags", if I understand
the problem correctly). (I doubt this will happen for your hardware. I
was able to intermittently reproduce it on a 256Mbyte i386, which has
about 77Mbytes of kernel memory.)

Have fun with it, rick
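Rick's two suggestions condensed into config form, assuming a stock
9.x server and client; the server/export/mount names are placeholders
and the values are the ones quoted in the thread, not tuned
recommendations:

# Server, /etc/rc.conf: run 64 nfsd threads instead of the default.
nfs_server_flags="-u -t -n 64"

# Client: raise read-ahead on the mount (default 1, maximum 16).
mount -t nfs -o nfsv3,readahead=8 server:/export /mnt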
> All our systems are based on 9.1-RELEASE with some merges on NFS and
> IXGBE from 10-RELEASE.
>
> Our machine:
> NIC - 10G Intel X540, which is based on the 82599 chipset
> RAM - 24G
> CPU - Intel Xeon E5-2448L 1.80GHz
> Motherboard - homemade
>
> Attached there is a small report; from page 18 on there are graphs
> that make it easier to see the results. Let me know if you want me
> to try anything else, any other patch and so on. I can keep the
> environment for one more week and can make more tests.
>
> Best Regards,
>
> 2014-03-19 8:06 GMT+08:00 Rick Macklem <rmacklem@uoguelph.ca>:
>
>> Marcelo Araujo wrote:
>>
>>> Hello Rick,
>>>
>>> I have a couple of machines with 10G interfaces capable of TSO.
>>> What kind of result are you expecting? Is it a speed-up in read?
>>
>> Well, if NFS is working well on these systems, I would hope you
>> don't see any regression.
>>
>> If your TSO enabled interfaces can handle more than 32 transmit
>> segments (there is usually a #define constant in the driver with
>> something like TX_SEGMAX in it), and if this is >= 34, you should
>> see very little effect.
>>
>> Even if your network interface is one of the ones limited to 32
>> transmit segments, the driver usually fixes the list via a call to
>> m_defrag(). Although this involves a bunch of bcopy()'ng, you still
>> might not see any easily measured performance improvement, assuming
>> m_defrag() is getting the job done. (Network latency and disk
>> latency in the server will predominate, I suspect. A server built
>> entirely using SSDs might be a different story?)
>>
>> Thanks for doing testing, since a lack of a regression is what I
>> care about most. (I am hoping this resolves cases where users have
>> had to disable TSO to make NFS work ok for them.)
>>
>> rick
>>
>>> I'm gonna make some tests today, but against 9.1-RELEASE, where my
>>> servers are running.
>>>
>>> Best Regards,
>>>
>>> 2014-03-18 9:26 GMT+08:00 Rick Macklem <rmacklem@uoguelph.ca>:
>>>
>>>> Hi,
>>>>
>>>> Several of the TSO capable network interfaces have a limit of 32
>>>> mbufs in the transmit mbuf chain (the drivers call these transmit
>>>> segments, which I admit I find confusing).
>>>>
>>>> For a 64K read/readdir reply or 64K write request, NFS passes a
>>>> list of 34 mbufs down to TCP. TCP will split the list, since it
>>>> is slightly more than 64K bytes, but that split will normally be
>>>> a copy by reference of the last mbuf cluster. As such, normally
>>>> the network interface will get a list of 34 mbufs.
>>>>
>>>> For TSO enabled interfaces that are limited to 32 mbufs in the
>>>> list, the usual workaround in the driver is to copy { real copy,
>>>> not copy by reference } the list to 32 mbuf clusters via
>>>> m_defrag(). (A few drivers use m_collapse(), which is less likely
>>>> to succeed.)
>>>>
>>>> As a workaround to this problem, the attached patch modifies NFS
>>>> to use larger pagesize clusters, so that the 64K RPC message is
>>>> in 18 mbufs (assuming a 4K pagesize).
>>>>
>>>> Testing on my slow hardware, which does not have TSO capability,
>>>> shows it to be performance neutral, but I believe avoiding the
>>>> overhead of copying via m_defrag() { and possible failures
>>>> resulting in the message never being transmitted } makes this
>>>> patch worth doing.
>>>>
>>>> As such, I'd like to request review and/or testing of this patch
>>>> by anyone who can do so.
>>>>
>>>> Thanks in advance for your help, rick
>>>> ps: If you don't get the attachment, just email and I'll send you
>>>> a copy.
>
> --
> Marcelo Araujo
> araujo@FreeBSD.org

From owner-freebsd-fs@FreeBSD.ORG Thu Mar 27 22:14:48 2014
Date: Thu, 27 Mar 2014 18:14:47 -0400 (EDT)
From: Rick Macklem
To: araujo@FreeBSD.org
Cc: FreeBSD Filesystems, Alexander Motin, FreeBSD Net
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem

Marcelo Araujo wrote:
> Hello All,
>
> 2014-03-27 8:27 GMT+08:00 Rick Macklem:
>> [...]
>
> I have plenty of machines using Intel X540, which is based on the
> 82599 chipset.
> [...]
> I'm wondering if this update on ixgbe from 32->35 could be applied
> also for versions < 9.2.
> [...]
>
My understanding is that the 32 limitation is a hardware one for the
82599. It appears that drivers other than ixgbe.c can be increased
from 32->35, but not ixgbe.c (for the 82599 chips).

rick

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 00:36:03 2014
Date: Thu, 27 Mar 2014 17:33:13 -0700 (PDT)
From: Victor Sneider
To: "freebsd-fs@freebsd.org"
Subject: Issue with vn_open(), help me please
Date: Thu, 27 Mar 2014 17:33:13 -0700 (PDT)
From: Victor Sneider
To: "freebsd-fs@freebsd.org"
Subject: Issue with vn_open(), help me please
Message-ID: <1395966793.20688.YahooMailNeo@web122101.mail.ne1.yahoo.com>

Hi all,

I used kern_openat()/fget()/fo_read() to open and read text files
inside the kernel.

When I load it as a kernel module, the module works fine and does its
job.

When I compile it into the kernel, it crashes in kern_openat(), more
precisely in vn_open(). I used callout(9) to defer reading the file
and wait for the rootfs mount to complete. I set the timeout long
enough (5 min, for example) but it still crashes.

I googled a lot but have not found any report about this issue. I am
not an expert on file reading/writing inside the kernel, but I feel
this could be a bug in vn_open().

Please help me.

Thank you.

V.Sneider.
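(One avenue worth checking -- a hunch, not something established in this
thread: callout(9) handlers run from the softclock context, which must
not sleep, while vn_open() can sleep on vnode locks, so an open deferred
with a callout can panic no matter how generous the timeout is. A
minimal sketch of pushing the work into a kernel process instead; every
name below is illustrative, not a known-good recipe:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/mount.h>
#include <sys/proc.h>

static void
filereader_main(void *arg)
{
	/* A kproc may sleep, so polling for the root mount is fine. */
	while (rootvnode == NULL)
		pause("rootwt", hz);
	/*
	 * This context is allowed to sleep, so the existing
	 * kern_openat()/fget()/fo_read() sequence can run here.
	 */
	kproc_exit(0);
}

static void
filereader_start(void *arg)
{
	struct proc *p;

	if (kproc_create(filereader_main, NULL, &p, 0, 0, "filereader"))
		printf("filereader: cannot start kernel process\n");
}
SYSINIT(filereader, SI_SUB_KTHREAD_IDLE, SI_ORDER_ANY, filereader_start,
    NULL);

The kproc also makes the "compiled into the kernel vs. loaded as a
module" difference plausible: a module loaded after boot never races
the root mount at all.)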
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 00:39:45 2014
Date: Thu, 27 Mar 2014 17:22:15 -0700
From: Kevin Rauwolf
To: freebsd-fs@freebsd.org
Subject: Missing zfs pools
Message-ID: <20140328002211.GA73597@chaos.tentacle.net>

I am running into a problem recovering data from an old backup. I had
a pool containing a mirror of two disks. I now have access to just one
of those disks, and I am trying to import the pool so that I can copy
files out. When I try to import, there seems to be no sign that the
pool exists.

# zpool import
# zpool import -D
# zpool list
no pools available
# zpool import -f zroot
cannot import 'zroot': no such pool available
# gpart list ada0
...
2. Name: ada0p2
   Mediasize: 1983218974720 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r0w0e0
   rawuuid: 7ada140b-4194-11e3-9da4-f46d04227f12
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: freebsd-zfs
   length: 1983218974720
   offset: 1048576
   type: freebsd-zfs
   index: 2
   end: 3873476607
   start: 2048
...
# zdb -l /dev/gptid/7ada140b-4194-11e3-9da4-f46d04227f12
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'zroot'
    state: 0
    txg: 0
    pool_guid: 3559240713814701742
    hostid: 740296715
    hostname: '######'
    top_guid: 5658684753042695532
    guid: 6772479201275930554
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 5658684753042695532
        metaslab_array: 33
        metaslab_shift: 34
        ashift: 12
        asize: 1983214256128
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 1772932453788439366
            path: '/dev/gptid/7a375bd1-4194-11e3-9da4-f46d04227f12'
            phys_path: '/dev/gptid/7a375bd1-4194-11e3-9da4-f46d04227f12'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 6772479201275930554
            path: '/dev/gptid/7ada140b-4194-11e3-9da4-f46d04227f12'
            phys_path: '/dev/gptid/7ada140b-4194-11e3-9da4-f46d04227f12'
            whole_disk: 1
            create_txg: 4
    features_for_read:
    create_txg: 0

Uberblock[0]
    magic = 0000000000bab10c
    version = 5000
    txg = 2436407
    guid_sum = 17763337121921767194
    timestamp = 1395437212 UTC = Fri Mar 21 14:26:52 2014
...

I have tried running Jeff Bonwick's "labelfix" tool, patched with
recent ZFS API changes, but it fails on the pwrite64 call that writes
the checksum. (That's why my txg is 4536407 in Uberblock 0; it's 0 in
all of the others.)

Any suggestions as to why import seems unable to find the existing
pool?

-k

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 00:56:11 2014
Date: Fri, 28 Mar 2014 01:56:05 +0100
From: Mateusz Guzik
To: Victor Sneider
Cc: "freebsd-fs@freebsd.org"
Subject: Re: Issue with vn_open(), help me please
Message-ID: <20140328005604.GD4730@dft-labs.eu>
In-Reply-To: <1395966793.20688.YahooMailNeo@web122101.mail.ne1.yahoo.com>

On Thu, Mar 27, 2014 at 05:33:13PM -0700, Victor Sneider wrote:
> Hi all,
>
> I used kern_openat()/fget()/fo_read() to open and read text files
> inside the kernel.
>
> When I load it as a kernel module, the module works fine and does
> its job.
>
> When I compile it into the kernel, it crashes in kern_openat(), more
> precisely in vn_open(). I used callout(9) to defer reading the file
> and wait for the rootfs mount to complete. I set the timeout long
> enough (5 min, for example) but it still crashes.
>
> I googled a lot but have not found any report about this issue. I am
> not an expert on file reading/writing inside the kernel, but I feel
> this could be a bug in vn_open().

Can you elaborate on the crash? Backtrace, crashing instruction, a
dump of the pointer involved in the crash, etc.

Are you running a kernel with INVARIANTS and WITNESS enabled? Does the
module work with these options?
--
Mateusz Guzik

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 01:40:01 2014
Date: Fri, 28 Mar 2014 01:40:01 GMT
From: Karl Denninger
To: freebsd-fs@FreeBSD.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Message-Id: <201403280140.s2S1e1kp021908@freefall.freebsd.org>

The following reply was made to PR kern/187594; it has been noted by GNATS.
From: Karl Denninger
To: bug-followup@FreeBSD.org, karl@fs.denninger.net
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Thu, 27 Mar 2014 20:32:17 -0500

The last change was the "cnt" structure rename; the previous rev was
the one where freepages was set to cnt.v_free_target (the margin was
removed in the rev sent on 24 March in the morning) -- there was no
logic change made in the 26 March followup vs. the previous one from
24 March.

The latest that I and others are running is what is on the PR (and the
link, which is identical -- a couple of people had said the PR followup
inclusions were problematic when copied down to be applied).

Apologies for the confusion.

--
-- Karl
karl@denninger.net
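(For readers following the PR from the archive, a compressed
illustration of the knob being discussed -- this is my paraphrase of
the idea, not the PR's actual diff:

#include <sys/param.h>
#include <sys/vmmeter.h>

/*
 * Paraphrase only: instead of a hand-tuned margin, treat the VM
 * system's own free-page target as the point where the ARC should
 * stop growing / start shrinking.  "cnt" is the 9.x/10.x name of the
 * global struct vmmeter; head later renamed it, which is the rename
 * referred to above.
 */
static int
arc_vm_pressure(void)
{
	return (cnt.v_free_count < cnt.v_free_target);
}

The appeal of keying off v_free_target is that it is the same number
the pagedaemon itself steers toward, so ARC and the VM system stop
fighting over the same pages.)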
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 02:22:59 2014
Date: Fri, 28 Mar 2014 10:22:56 +0800
From: Marcelo Araujo
Reply-To: araujo@FreeBSD.org
To: Rick Macklem
Cc: FreeBSD Filesystems, Alexander Motin, FreeBSD Net
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem
In-Reply-To: <380071870.1832898.1395956256461.JavaMail.root@uoguelph.ca>
References: <380071870.1832898.1395956256461.JavaMail.root@uoguelph.ca>
2014-03-28 5:37 GMT+08:00 Rick Macklem:

> Christopher Forgeron wrote:
> > I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE or
> > earlier, as a 9.2-STABLE from last year I have doesn't exhibit the
> > problem. New code in if.c at line 660 looks to be what is starting
> > this, which makes me wonder how TSO was being handled before 9.2.
> >
> > I also like Rick's NFS patch for cluster size. I notice an
> > improvement, but don't have solid numbers yet. I'm still stress
> > testing it as we speak.
>
> Unfortunately, this causes problems for small i386 systems, so I am
> reluctant to commit it to head. Maybe a variant that is only enabled
> for amd64 systems with lots of memory would be ok?
>

Rick,

Maybe creating a SYSCTL so the end user can enable/disable it would be
more reasonable. Also, of course, it is only safe if just 64-bit CPUs
can enable this SYSCTL. Any other option seems not OK: it will be hard
to judge what is lots of memory and what is not, since it depends on
what is running on the system.

The SYSCTL would be great, and in case you don't have time to do it, I
can give you a hand.

I'm gonna do more benchmarks today and will send another report, but
in our product here I'm inclined to use this patch, because a 10~20%
speed-up in reads for me is a lot. :-)

Thank you so much and best regards,
--
Marcelo Araujo
araujo@FreeBSD.org
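(For what it's worth, a minimal sketch of what such a knob could look
like -- the names and the vfs parent node are my own guesses, not
Rick's actual patch:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/mbuf.h>

SYSCTL_DECL(_vfs);

/* 0 = allocate 2k (MCLBYTES) clusters as today; 1 = pagesize clusters. */
static int nfs_pagesize_clusters = 0;
SYSCTL_INT(_vfs, OID_AUTO, nfs_pagesize_clusters, CTLFLAG_RW,
    &nfs_pagesize_clusters, 0,
    "Use pagesize mbuf clusters for NFS replies");

/* At the allocation site, the choice could then look like: */
static struct mbuf *
nfs_alloc_reply_mbuf(void)
{
	if (nfs_pagesize_clusters)
		return (m_getjcl(M_WAITOK, MT_DATA, M_PKTHDR,
		    MJUMPAGESIZE));
	return (m_getcl(M_WAITOK, MT_DATA, M_PKTHDR));
}

Defaulting the sysctl to off would keep small i386 systems on the
current behavior unless an administrator opts in.)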
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 02:54:13 2014
Date: Thu, 27 Mar 2014 19:51:14 -0700 (PDT)
From: Victor Sneider
To: Mateusz Guzik
Cc: "freebsd-fs@freebsd.org"
Subject: Re: Issue with vn_open(), help me please
Message-ID: <1395975074.6795.YahooMailNeo@web122106.mail.ne1.yahoo.com>
In-Reply-To: <20140328005604.GD4730@dft-labs.eu>

Hi Mateusz,

Thanks for your fast response.

I have recorded what was printed on the console when it crashes, but I
have not done a backtrace and dump. I'll do that and post it soon.

I did not have INVARIANTS and WITNESS enabled. Should I enable them?

I have to correct one thing: when I load the module, callout() is not
used, and the file opening and reading are performed without issue.
But when I use callout() in the module, I experience the same crash.

Here is the console output when the system crashes:

Trap cause = 2 (TLB miss (load or instr. fetch) - kernel mode)
panic: trap
Uptime: 5m1s
Automatic reboot in 15 seconds - press a key on the console to abort

Thanks.

V.Sneider.

On Thursday, March 27, 2014 8:56:09 PM, Mateusz Guzik wrote:

On Thu, Mar 27, 2014 at 05:33:13PM -0700, Victor Sneider wrote:
> Hi all,
>
> I used kern_openat()/fget()/fo_read() to open and read text files
> inside the kernel.
>
> When I load it as a kernel module, the module works fine and does
> its job.
>
> When I compile it into the kernel, it crashes in kern_openat(), more
> precisely in vn_open(). I used callout(9) to defer reading the file
> and wait for the rootfs mount to complete.
> I set the timeout long enough (5 min, for example) but it still
> crashes.
>
> I googled a lot but have not found any report about this issue. I am
> not an expert on file reading/writing inside the kernel, but I feel
> this could be a bug in vn_open().

Can you elaborate on the crash? Backtrace, crashing instruction, a
dump of the pointer involved in the crash, etc.

Are you running a kernel with INVARIANTS and WITNESS enabled? Does the
module work with these options?
--
Mateusz Guzik

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 09:23:37 2014
Date: Fri, 28 Mar 2014 10:23:35 +0100
From: Joar Jegleim
To: kpneal@pobox.com
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
In-Reply-To: <20140328005911.GA30665@neutralgood.org>
References: <20140328005911.GA30665@neutralgood.org>

On 28 March 2014 01:59, wrote:
> On Thu, Mar 27, 2014 at 11:10:48AM +0100, Joar Jegleim wrote:
>> But it's really not a problem for me how long it takes to warm up
>> the l2arc; if it takes a week that's ok. After all I don't plan on
>> rebooting this setup very often + I have 2 servers, so I have the
>> option to let the server warm up until I hook it into production
>> again after maintenance / patch upgrades and so on.
>>
>> I'm just curious about whether or not the l2arc will warm up by
>> itself, or if I would have to do that manual rsync to force l2arc
>> warmup.
>
> Have you measured the difference in performance between a cold L2ARC
> and a warm one? Even better, have you measured the performance with a
> cold L2ARC to see if it meets your performance needs?

No I haven't.
I actually started using those 2 ssd's for l2arc the day before I sent
out this mail to the list.
I haven't done this the 'right' way by producing some numbers for
measurement, but I do know that the way this application works today,
it will pull random jpegs from this dataset of about 1.6TB, consisting
of lots of millions of files (more than 20 million). And today this
pool is served from 20 SATA 7.2K disks, which would be the slowest
solution for random read access.
Based on the huge performance gain from using ssd's simply by looking
at the spec., but also by looking at other people's graphs from the
net (people who have done this more thoroughly than me), I'm pretty
confident to say that any time the application requests a jpeg, if it
was served from either ram or ssd it would be a substantial
performance gain compared to serving it from the 7.2k array of disks.

>
> If you really do need a huge L2ARC holding most of your data have you
> considered that maybe you are taking the wrong approach to getting
> performance? Consider load balancing across multiple servers, or
> having your application itself spread the loads of pictures across
> multiple servers.

yes I have :p
but again that would mean I'd have to rewrite the application, or I
would have to have several servers mirrored. There are problems with
having several servers mirrored related to the application, I'll skip
those details here, but I have thought about what if I served those
jpegs from say 4 backend servers. I really don't think it would help
compared to serving stuff from ssd's, or I would at least have to have
20 disks per server for it to be any performance gain... But I'd still
have latency and all the disadvantages of having 7.2k disks.
The next release of the application has actually taken this into
account, and I will in the future be able to spread this over 4
servers. For the future I might spread this stuff over more backends.
At the moment the cheapest and easiest would be to simply buy 2 more
480GB ssd's, put them in the server and make sure as much as possible
of the dataset resides in l2arc.

>
> If a super-huge L2ARC is really needed for the traffic _today_, what
> about when you have more traffic in 3-6-12 months? What about if you
> increase the number of pictures you are randomly choosing from? If
> your server is at the limit of its performance today then pretty soon
> you will outgrow it. Then what?

The server is actually far from any limit; in fact it has so 'little'
to do I've been a bit put off trying to figure out why our frontpage
won't be more snappy. And these things will probably be taken care of,
again, in the next release of the application, which will give me
control of 'today's' frontpage mosaic pictures, where I can either
make sure frontpage jpegs stay in arc, or I'll simply serve frontpage
jpegs from varnish.

>
> What happens if your production server fails and your backup server
> has a cold L2ARC? Then what?

performance would drop, but nothing really serious + I got 2 of them,
and my plan is to make sure the l2arc for the second server is warm.

>
> Having more and more parts in a server also means you have more
> opportunities for a failure, and that means a higher chance of
> something bringing down the entire server. What if one of the SSDs in
> your huge L2ARC fails in a way that locks the bus? This is especially
> important since you indicated you are using cheaper SSDs for the
> L2ARC. Fewer parts -> more robust server.

Good point.
Again, I have a failover server and a proxy with health checks in
front, and actually I have a third 'fall-back' server too, for worst
case scenarios.

>
> On the ZIL: the ZIL holds data on synchronous writes. That's it. This
> is usually a fraction of the writes being done except in some
> circumstances. Have you measured to see if, or do you otherwise know
> for sure, that you really do need a ZIL? I suggest not adding a ZIL
> unless you are certain you need it.

Yes, I only recently realized that too, and I'm really not sure if a
zil is required.
Some small portion of files (some hundred MBs) are served over nfs
from the same server. If I understand it right a zil will help for nfs
stuff (?), but I'm not sure if there's any gain in having a zil today.
On the other hand, a zil doesn't have to be big; I can simply buy a
128GB ssd, which is cheap today.

>
> Oh, and when I need to pull files into memory I usually use something
> like 'find . -type f -exec cat {} \; >/dev/null'. Well, actually, I
> know I have no spaces or special characters in filenames so I really
> do 'find . -type f -print | xargs cat > /dev/null'. This method is
> probably best if you use '-print0' instead plus the correct argument
> to xargs.

Thanks, this really makes sense and I reckon it would be faster than
rsync from another server.

>
> --
> Kevin P. Neal
> http://www.pobox.com/~kpn/
>
> "Nonbelievers found it difficult to defend their position in \
> the presence of a working computer." -- a DEC Jensen paper

--
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode
----------------------
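(For the archives: the NUL-safe variant kpn alludes to would be
something like

  find /path/to/dataset -type f -print0 | xargs -0 cat > /dev/null

with the path illustrative -- the same warming idea, just robust
against spaces and odd characters in filenames.)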
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 12:12:12 2014
Date: Fri, 28 Mar 2014 07:12:01 -0500
From: Karl Denninger
To: Joar Jegleim, freebsd-fs@freebsd.org
Subject: Re: zfs l2arc warmup
Message-ID: <53356711.8010509@denninger.net>
References: <20140328005911.GA30665@neutralgood.org>

On 3/28/2014 4:23 AM, Joar Jegleim wrote:
> On 28 March 2014 01:59, wrote:
>> On Thu, Mar 27, 2014 at 11:10:48AM +0100, Joar Jegleim wrote:
>>> But it's really not a problem for me how long it takes to warm up
>>> the l2arc; if it takes a week that's ok. After all I don't plan on
>>> rebooting this setup very often + I have 2 servers, so I have the
>>> option to let the server warm up until I hook it into production
>>> again after maintenance / patch upgrades and so on.
>>>
>>> I'm just curious about whether or not the l2arc will warm up by
>>> itself, or if I would have to do that manual rsync to force l2arc
>>> warmup.
>> Have you measured the difference in performance between a cold L2ARC
>> and a warm one? Even better, have you measured the performance with
>> a cold L2ARC to see if it meets your performance needs?
> No I haven't.
> I actually started using those 2 ssd's for l2arc the day before I
> sent out this mail to the list.
> I haven't done this the 'right' way by producing some numbers for
> measurement, but I do know that the way this application works today,
> it will pull random jpegs from this dataset of about 1.6TB,
> consisting of lots of millions of files (more than 20 million). And
> today this pool is served from 20 SATA 7.2K disks, which would be the
> slowest solution for random read access.
> Based on the huge performance gain from using ssd's simply by looking
> at the spec., but also by looking at other people's graphs from the
> net (people who have done this more thoroughly than me), I'm pretty
> confident to say that any time the application requests a jpeg, if it
> was served from either ram or ssd it would be a substantial
> performance gain compared to serving it from the 7.2k array of disks.

No, the simplest solution is IMHO to stop trying to RAM-back a 1.6TB
data set through various machinations.

A cache is just that -- a cache. Its purpose is to make *frequently
accessed* data more-quickly available to an application. You have the
antithesis of cachable data in that you have a pure random access
model with no predictive or "frequently used" means to determine what
is likely to be requested next.

IMHO the best and cheapest way to serve that data is to eliminate
rotational and positioning latency from the data path. If it is a
read-nearly-always (or read-only) data set then redundancy is only
necessary to prevent downtime (not data loss), since it can be easily
backed up.

For the model you describe I would buy however many SSD disks were
necessary to store said data set, design a means to back it up
reliably, and be done with it.

Backing the data store with L2ARC (and the RAM to manage it) is likely
self-defeating, as you not only are paying for BOTH the spinning rust
AND the SSDs but you have doubled the number of devices that can fail
and interrupt service.
--
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 14:56:17 2014
Date: Fri, 28 Mar 2014 09:56:08 -0500 (CDT)
From: Bob Friesenhahn
To: Joar Jegleim
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, 28 Mar 2014, Joar Jegleim wrote:

> The server is actually far from any limit; in fact it has so 'little'
> to do I've been a bit put off trying to figure out why our frontpage
> won't be more snappy.

The lack of "snappy" is likely to be an application problem rather
than a server problem. Take care not to blame the server for an
application design problem. You may be over-building your server when
all that is actually needed is some simplification of the web content.

The design of the application is important. The design of the content
provided to the web client is important.

Something I learned about recently which could be really helpful to
you: there is a Firefox tool called "Web Developer Toolbar" which has
a "Network" option. This option will show all files loaded for a given
web page, including the time when the request was initiated and when
it completed. You may find that the apparent latency problem is not
your server at all. You may find that there are many requests to
servers not under your control. The performance problem is likely to
be due to the design of the content passed to the browser.

For example, I just requested an initial load of an
application-generated page and I see that the base page loaded in
722ms, then there were two more subsequent loads in parallel requiring
335ms and 445ms, and then one more load subsequent to that requiring
262ms. The entire page load time was 1.7 seconds.
The load time was dominated by the chain of dependencies. If I reload
the page (the request is now 'hot' on the server) then I see several
of the response times substantially diminish, but some others remain
virtually the same, resulting in a page load time of 1.13 seconds.

From what I have been seeing, web page load times often don't have
much at all to do with the performance of the server.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 17:02:39 2014
Date: Fri, 28 Mar 2014 17:02:31 +0000
From: Karl Pielorz
To: freebsd-fs@freebsd.org
Subject: File system corruption with 9.2-R on PC Engines Alix boards

We have a number of PC Engines Alix boards, running FreeBSD 8.2. They
boot off of onboard CF cards.

I recently installed a new one of these with 9.2-Release (i386) - only
to discover that it silently (i.e. with no errors) destroys the file
system when in use.

Typically we install these systems then flip the file system over to
'read-only' when sending them out. The corruption happens while we're
installing various packages etc. We don't run journaled soft-updates
on these boxes - just regular soft-updates.

No console errors are logged, no syslog messages are logged. Just
after a while you might go to edit '/etc/rc.conf' - to find when you
vi it - it's now become a copy of '/etc/ntp.conf' - or other oddities.

A reboot runs fsck - which will usually fail then. Running a
foreground check reels off thousands of duplicate errors. If you
foreground check the file system, you're usually left with "not a lot"
when it's finished (i.e. if you run 'fsck -y /').

8.2 runs fine (we have systems that have been running embedded for
years) - 9.2 doesn't.

I found a similar thread: This alludes to CF card quality etc. - the
cards we've been using have worked fine for years - and a 9.2 'flaky'
system reformatted to 8.2 then runs fine.

Anyone else running later than 8.2 on PC Engine Alix kit?
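(If anyone wants to compare setups: 'tunefs -p /dev/ada0s1a' -- device
name illustrative -- prints a file system's current soft-updates and
journaling flags, which makes it easy to verify that an 8.2 image and
a 9.2 image really are configured identically before suspecting the
release itself.)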
-Karl

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 20:40:08 2014
Date: Sat, 29 Mar 2014 00:40:04 +0400 (MSK)
From: Dmitry Morozovsky
To: Joar Jegleim
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, 28 Mar 2014, Joar Jegleim wrote:

[snip most of]

> > Have you measured to see if, or do you otherwise know for sure,
> > that you really do need a ZIL? I suggest not adding a ZIL unless
> > you are certain you need it.
> Yes, I only recently realized that too, and I'm really not sure if a
> zil is required.
> Some small portion of files (some hundred MBs) are served over nfs
> from the same server. If I understand it right a zil will help for
> nfs stuff (?), but I'm not sure if there's any gain in having a zil
> today.
> On the other hand, a zil doesn't have to be big; I can simply buy a
> 128GB ssd, which is cheap today.

Please don't forget that, unlike L2ARC, if you lose the ZIL during a
sync write, you've effectively lost the pool.
Hence, you have two options:
- have the ZIL on an enterprise-grade SLC SSD (aircraft-grade
  prices ;P)
- allocate a mirrored ZIL from a fraction of your existing SSDs
  otherwise used for L2ARC (the rule of thumb, if I'm not mistaken,
  was "take all the write throughput your low-level disks can do in
  1 second, double it, and that will be the size of your ZIL")

We (by all means not at your read pressure) used the second approach,
like the following:

  pool: br
 state: ONLINE
  scan: resilvered 13.0G in 0h3m with 0 errors on Sun Aug 18 19:52:20 2013
config:

        NAME             STATE     READ WRITE CKSUM
        br               ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            gpt/br0      ONLINE       0     0     0
            gpt/br4      ONLINE       0     0     0
          mirror-1       ONLINE       0     0     0
            gpt/br1      ONLINE       0     0     0
            gpt/br5      ONLINE       0     0     0
          mirror-2       ONLINE       0     0     0
            gpt/br2      ONLINE       0     0     0
            gpt/br6      ONLINE       0     0     0
          mirror-3       ONLINE       0     0     0
            gpt/br3      ONLINE       0     0     0
            gpt/br7      ONLINE       0     0     0
        logs
          mirror-4       ONLINE       0     0     0
            gpt/br-zil0  ONLINE       0     0     0
            gpt/br-zil1  ONLINE       0     0     0
        cache
          gpt/br-l2arc0  ONLINE       0     0     0
          gpt/br-l2arc1  ONLINE       0     0     0

where logs/cache are like

root@briareus:~# gpart show -l da9
=>        34  234441581  da9  GPT  (111G)
          34       2014       - free -  (1M)
        2048   16777216    1  br-zil0    (8.0G)
    16779264  217661440    2  br-l2arc0  (103G)
   234440704        911       - free -  (455k)

(this is our main PostgreSQL server, with 8 SASes and 2*Intel3500
SSDs)

--
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
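P.S. To put rough, illustrative numbers on that rule of thumb: eight
SAS spindles at (say) ~150 MB/s of sequential write each is about
1.2 GB/s; doubling it gives roughly 2.4 GB, so the 8 GB br-zil
partitions above leave comfortable headroom. (The per-disk figure is
an assumption for the arithmetic, not a measurement.)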
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 20:52:25 2014
Date: Sat, 29 Mar 2014 09:52:16 +1300
From: Berend de Boer
To: Dmitry Morozovsky
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
Message-ID: <87ha6iqc5b.wl%berend@pobox.com>
References: <20140328005911.GA30665@neutralgood.org>

>>>>> "Dmitry" == Dmitry Morozovsky writes:

Dmitry> Please don't forget that, unlike L2ARC, if you lose the ZIL
Dmitry> during a sync write, you've effectively lost the pool.

Wow, is that true?

I'm using a ZIL on Amazon AWS, and these machines are virtual, i.e. I
have no guarantee they will exist. Obviously that's not usually a
problem :-)

I thought I would just lose my sync write, and my pool would still be
there?

Can people enlighten me if this is indeed correct?

--
All the best,

Berend de Boer

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:00:04 2014
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:00:04 2014
Date: Fri, 28 Mar 2014 14:00:03 -0700
From: Freddie Cash
To: Dmitry Morozovsky
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, Mar 28, 2014 at 1:40 PM, Dmitry Morozovsky wrote:

> On Fri, 28 Mar 2014, Joar Jegleim wrote:
>
> [snip most of]
>
> > > Have you measured to see if, or do you otherwise know for sure, that
> > > you really do need a ZIL? I suggest not adding a ZIL unless you are
> > > certain you need it.
> > Yes, I only recently realized that too, and I'm really not sure if a
> > zil is required.
> > Some small portion of files (some hundred MBs) are served over nfs from
> > the same server; if I understand it right a zil will help for nfs
> > stuff (?), but I'm not sure there's any gain in having a zil today.
> > On the other hand, a zil doesn't have to be big; I can simply buy a
> > 128GB ssd, which are cheap today.
>
> Please don't forget that, unlike L2ARC, if you lose the ZIL during a sync
> write, you effectively lose the pool.

Nope. Not even close.

The ZIL is only ever read at boot time. If you lose the ZIL between the time the data is written to the ZIL and the time the async write of the data is actually done to the pool ... and the server is rebooted at that time, then you get an error message at pool import.

You can then force the import of the pool, losing any *data* in the ZIL, but nothing else.

It used to be (back in the pre-ZFSv13-ish days) that if you lost the ZIL while there was data in it that wasn't yet written to the pool, the pool would fault and be gone. Hence the rule of thumb to always mirror the ZIL.

Around ZFSv14-ish, the ability to import a pool with a missing ZIL was added.

Remember the flow of data in ZFS:
  async write request --> TXG --> disk
  sync write request  --> ZIL
                       \--> TXG --> disk

All sync writes are written to the pool as part of a normal async TXG after they are written synchronously to the ZIL. And the ZIL is only ever read during pool import.

[Note, I'm not a ZFS developer, so some of the above may not be 100% accurate, but that's the gist of it.]

--
Freddie Cash
fjwcash@gmail.com
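[To make the "force the import" step above concrete, a sketch of what recovery could look like for a pool that lost its separate log device and was then rebooted. The pool name tank is a placeholder; -m is the documented "import with a missing log device" flag.]

  # a plain import fails and reports the missing log vdev
  zpool import tank
  # accept the loss of any uncommitted ZIL records and import anyway
  zpool import -m tank
  zpool status tank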
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:01:29 2014
Date: Fri, 28 Mar 2014 14:01:28 -0700
From: Freddie Cash
To: Dmitry Morozovsky
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, Mar 28, 2014 at 2:00 PM, Freddie Cash wrote:

[snip -- full quote of the previous message]
Oh, and if you lose the separate log vdev during normal operation, then the pool reverts to using the ZIL inside of the pool. IOW, there's always a ZIL in a pool, whether that be internal to the pool or part of a separate log vdev.

--
Freddie Cash
fjwcash@gmail.com
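[The "pool reverts to the in-pool ZIL" behaviour is also what makes it possible to retire or test a log vdev administratively. A sketch only, again with a hypothetical pool named tank and the device/vdev names from the earlier zpool status output:]

  # take one half of the log mirror offline; the pool keeps running
  zpool offline tank gpt/br-zil0
  zpool status tank
  zpool online tank gpt/br-zil0
  # or remove the whole log mirror; sync writes fall back to the in-pool ZIL
  zpool remove tank mirror-4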
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:16:31 2014
Date: Sat, 29 Mar 2014 01:16:25 +0400 (MSK)
From: Dmitry Morozovsky
To: Freddie Cash
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, 28 Mar 2014, Freddie Cash wrote:

[snip -- full quote of Freddie's explanation above]

Ah, thanks, I stand corrected. Great -- we've tightened the window in which we could possibly lose precious data.

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:19:29 2014
Date: Sat, 29 Mar 2014 01:19:26 +0400 (MSK)
From: Dmitry Morozovsky
To: Freddie Cash
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, 28 Mar 2014, Freddie Cash wrote:

[snip most again]

> Around ZFSv14-ish, the ability to import a pool with a missing ZIL was
> added.
>
> Remember the flow of data in ZFS:
>   async write request --> TXG --> disk
>   sync write request  --> ZIL
>                        \--> TXG --> disk
>
> All sync writes are written to the pool as part of a normal async TXG
> after they are written synchronously to the ZIL. And the ZIL is only ever
> read during pool import.

On the other side, doesn't this put sync-dependent systems, like databases, at risk?

I'm thinking not about losing the transaction, but about possibly leaving your filesystem in the middle of a (database PoV) transaction, hence rendering your DB inconsistent?

Quick googling seems to be uncertain about it...

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:37:01 2014
Date: Fri, 28 Mar 2014 17:35:39 -0400
From: mikej
Subject: Re: zfs l2arc warmup
Message-ID: <33ff828c517307c9681c361a12cff2ee@mail.mikej.com>
References: <20140328005911.GA30665@neutralgood.org>

On 2014-03-28 17:19, Dmitry Morozovsky wrote:

[snip the quoted flow diagram]

> On the other side, doesn't this put sync-dependent systems, like
> databases, at risk?
>
> I'm thinking not about losing the transaction, but about possibly
> leaving your filesystem in the middle of a (database PoV) transaction,
> hence rendering your DB inconsistent?
>
> Quick googling seems to be uncertain about it...
As I understand it..... (and I am always looking for an education)

Any file system that honors fsync, provided the DB uses fsync, should be fine.

Any data loss then will only be determined by what transaction (log) capabilities the DB has.

--mikej
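[The "honors fsync" caveat maps directly onto a per-dataset ZFS property. A short illustration; the dataset name tank/db is hypothetical:]

  # 'standard' honors fsync via the ZIL; 'disabled' acknowledges fsync
  # without stable storage -- exactly the "lying" case discussed below
  zfs get sync tank/db
  zfs set sync=standard tank/db
  # for extra paranoia, force every write on a DB dataset through the ZIL
  zfs set sync=always tank/db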
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:41:23 2014
Date: Fri, 28 Mar 2014 14:41:22 -0700
From: Freddie Cash
To: Dmitry Morozovsky
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, Mar 28, 2014 at 2:19 PM, Dmitry Morozovsky wrote:

[snip]

> On the other side, doesn't this put sync-dependent systems, like
> databases, at risk?
>
> I'm thinking not about losing the transaction, but about possibly
> leaving your filesystem in the middle of a (database PoV) transaction,
> hence rendering your DB inconsistent?
>
> Quick googling seems to be uncertain about it...

That I don't know. Again, I'm not a ZFS code guru; just a very happy/active ZFS user and reader of stuff online. :)

You're thinking of the small window where:
- the database writes a transaction to disk
- zfs writes the data to the ZIL on the log vdev
- zfs returns "data is written to disk" to the DB
- zfs queues up the write to the pool
- the log device dies
- the pool is forcibly exported/the server loses power

Such that the DB considers the transaction complete and the data safely written to disk, but it's actually only written to the ZIL on the separate log device (which no longer exists) and is not stored in the pool yet.

Yeah, that could be a problem. A very unlikely event, although not entirely impossible.

I would think it would be up to the database to be able to roll back to a point prior to the corrupted transaction. If the DB has a log or journal or whatever, then it could be used to roll back, no?

It's still considered best practice to use a mirrored log device. It's just no longer required, nor does a dead log lead to a completely dead pool.

--
Freddie Cash
fjwcash@gmail.com
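[One way to watch that window in practice is to confirm that sync traffic really lands on the log vdev first. A sketch, again with the hypothetical pool tank:]

  # per-vdev I/O, refreshed every second: during a sync-heavy workload the
  # 'logs' row shows writes, which are later flushed to the data mirrors
  # as part of the normal TXG
  zpool iostat -v tank 1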
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:45:11 2014
Date: Sat, 29 Mar 2014 01:45:08 +0400 (MSK)
From: Dmitry Morozovsky
To: mikej
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs l2arc warmup
In-Reply-To: <33ff828c517307c9681c361a12cff2ee@mail.mikej.com>
References: <20140328005911.GA30665@neutralgood.org> <33ff828c517307c9681c361a12cff2ee@mail.mikej.com>

On Fri, 28 Mar 2014, mikej wrote:

[snip the quoted thread]

> As I understand it..... (and I am always looking for an education)
>
> Any file system that honors fsync, provided the DB uses fsync, should be
> fine.
>
> Any data loss then will only be determined by what transaction (log)
> capabilities the DB has.

And?

1. The DB issues a "sync WAL" request, which is translated into fsync-like FS requests, which (IIUC) should be directed to the ZIL.

2. The ZIL fails in the middle of the request, or, even worse, after reporting that the ZIL transaction is done but before the ZIL is replayed to the underlying media.

3. Inconsistent DB?

I'm in hope I'm wrong somewhere...

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
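[For the WAL-heavy pattern described above, ZFS also exposes a per-dataset knob for how sync writes use a separate log device. An illustrative sketch only; tank/pg-wal and tank/pg-data are hypothetical dataset names:]

  # keep small WAL fsyncs on the fast slog (this is the default) ...
  zfs set logbias=latency tank/pg-wal
  # ... and bypass the slog for bulk sync writes, keeping it free for the WAL
  zfs set logbias=throughput tank/pg-data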
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 21:52:20 2014
Date: Sat, 29 Mar 2014 01:52:17 +0400 (MSK)
From: Dmitry Morozovsky
To: Freddie Cash
Cc: "freebsd-fs@freebsd.org"
Subject: Re: zfs l2arc warmup
References: <20140328005911.GA30665@neutralgood.org>

On Fri, 28 Mar 2014, Freddie Cash wrote:

> That I don't know. Again, I'm not a ZFS code guru; just a very
> happy/active ZFS user and reader of stuff online. :)
>
> You're thinking of the small window where:
> - the database writes a transaction to disk
> - zfs writes the data to the ZIL on the log vdev
> - zfs returns "data is written to disk" to the DB
> - zfs queues up the write to the pool
> - the log device dies
> - the pool is forcibly exported/the server loses power

Pretty much the same as I've just written in the parallel reply ;)

> Such that the DB considers the transaction complete and the data safely
> written to disk, but it's actually only written to the ZIL on the separate
> log device (which no longer exists) and is not stored in the pool yet.

So, if the ZIL dies but the server stays alive, the sync write will still be completed, IIUC from your and others' comments? Then, of course, it shortens the window of danger to nearly zero.

> Yeah, that could be a problem. A very unlikely event, although not
> entirely impossible.
>
> I would think it would be up to the database to be able to roll back to a
> point prior to the corrupted transaction. If the DB has a log or journal
> or whatever, then it could be used to roll back, no?

If it could detect this situation, yes. I'm not sure that detecting it is simple, though, after previously being told "the write has been done" ;P

> It's still considered best practice to use a mirrored log device. It's just
> no longer required, nor does a dead log lead to a completely dead pool.

Well, I suppose the middle ground is "weigh your ability to lose data, and prepare measures accordingly" ;P For the database case, I'm not so sure, alas.

But, anyway, thank you very much for the thoughtful comments!

-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 22:41:32 2014
Date: Fri, 28 Mar 2014 16:39:49 -0600
From: John Nielsen
To: Karl Pielorz
Cc: freebsd-fs@freebsd.org
Subject: Re: File system corruption with 9.2-R on PC Engines Alix boards
Message-Id: <0D92C2CC-3640-42AA-B589-33929641E984@jnielsen.net>

On Mar 28, 2014, at 11:02 AM, Karl Pielorz wrote:

> We have a number of PC Engines Alix boards, running FreeBSD 8.2. They boot
> off of onboard CF cards.
>
> I recently installed a new one of these with 9.2-RELEASE (i386) - only to
> discover that it silently (i.e. with no errors) destroys the file system
> when in use.
>
> Typically we install these systems, then flip the file system over to
> 'read-only' when sending them out. The corruption happens while we're
> installing various packages etc.
>
> We don't run journaled soft-updates on these boxes - just regular
> soft-updates.
>
> No console errors are logged, no syslog messages are logged. Just after a
> while you might go to edit '/etc/rc.conf' - to find, when you vi it, that
> it's become a copy of '/etc/ntp.conf' - or other oddities.
>
> A reboot runs fsck - which will usually fail then. Running a foreground
> check reels off thousands of duplicate errors. If you foreground check the
> file system, you're usually left with "not a lot" when it's finished (i.e.
> if you run 'fsck -y /').
>
> 8.2 runs fine (we have systems that have been running embedded for years)
> - 9.2 doesn't.
>
> I found a similar thread:
>
> This alludes to CF card quality etc. - but the cards we've been using have
> worked fine for years, and a 9.2 'flaky' system reformatted to 8.2 then
> runs fine.
>
> Anyone else running later than 8.2 on PC Engines Alix kit?

I have a pair of alix3dw systems that I use as wireless access points. Both are running 10-STABLE on internal CF without problems. (I do world and package builds on a faster host.) One has SU+J, the other has just SU. I know I ran some version of 9-STABLE on the same hardware as well (probably 9.1-ish) and don't remember any issues like those you are describing.

FWIW, both use this CF card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134575

You might try to see when the badness appeared in the FreeBSD 9 branch by building kernels from various points and running them. If you use a different build host and just build a custom kernel without (many) modules or world, it shouldn't be too onerous. I'd be curious to see if a different CF card has the same problems too.

JN
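[The bisection John suggests could look roughly like the following. The revision number and the DESTDIR path are placeholders, not values from the thread; the svn URL is the stable/9 branch of the era.]

  # on the build host: check out stable/9 at a candidate revision
  svn checkout -r 250000 svn://svn.freebsd.org/base/stable/9 /usr/src
  cd /usr/src
  make -j4 buildkernel KERNCONF=GENERIC
  # install the kernel onto the mounted Alix CF root and boot-test it,
  # then repeat, halving the revision range each time
  make installkernel KERNCONF=GENERIC DESTDIR=/mnt/alix-root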
From owner-freebsd-fs@FreeBSD.ORG Fri Mar 28 22:44:43 2014
Date: Fri, 28 Mar 2014 18:44:36 -0400 (EDT)
From: Rick Macklem
To: araujo@FreeBSD.org
Cc: FreeBSD Filesystems, Alexander Motin
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem
Message-ID: <1377879526.2465097.1396046676367.JavaMail.root@uoguelph.ca>

Marcelo Araujo wrote:
> 2014-03-28 5:37 GMT+08:00 Rick Macklem:
>
> > Christopher Forgeron wrote:
> > > I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE or
> > > earlier, as a 9.2-STABLE from last year I have doesn't exhibit the
> > > problem. New code in if.c at line 660 looks to be what is starting
> > > this, which makes me wonder how TSO was being handled before 9.2.
> > >
> > > I also like Rick's NFS patch for cluster size. I notice an
> > > improvement, but don't have solid numbers yet. I'm still stress
> > > testing it as we speak.
> > >
> > Unfortunately, this causes problems for small i386 systems, so I
> > am reluctant to commit it to head. Maybe a variant that is only
> > enabled for amd64 systems with lots of memory would be ok?
>
> Rick,
>
> Maybe you can create a SYSCTL so it can be enabled/disabled by the end
> user; that would be more reasonable. It is, of course, only safe if
> 64-bit CPUs alone can enable this SYSCTL. Any other option seems not
> OK: it will be hard to judge what is "lots of memory" and what is not,
> since that depends on what is running on the system.

I guess adding it so it can be optionally enabled via a sysctl isn't a bad idea. I think the largest risk here is "how do you tell people what the risk of enabling this is"? There are already a bunch of sysctls related to NFS that few people know how to use. (I recall that Alexander has argued that folk don't want to worry about these tunables, and I tend to agree.)

If I do a variant of the patch that uses m_getjcl(..M_WAITOK..), then at least the "breakage" is thread(s) sleeping on "btallo", which is fairly easy to check for, although rather obscure.
(Btw, I've never reproduced this for a patch that changes the code to always use MJUMPAGESIZE mbuf clusters. I can only reproduce it intermittently when the patch mixes allocation of MCLBYTES clusters and MJUMPAGESIZE clusters.)

I've been poking at it to try and figure out how to get m_getjcl(..M_NOWAIT..) to return NULL instead of looping when it runs out of boundary tags (to see if that can result in a stable implementation of the patch), but haven't had much luck yet.

Bottom line: I just don't like committing a patch that can break the system in such an obscure way, even if it is enabled via a sysctl. Others have an opinion on this?

Thanks, rick

> The SYSCTL will be great, and in case you don't have time to do it, I
> can give you a hand.
>
> I'm gonna do more benchmarks today and will send another report, but in
> our product here I'm inclined to use this patch, because a 10~20%
> speed-up in reads for me is a lot. :-)
>
> Thank you so much and best regards,
> --
> Marcelo Araujo
> araujo@FreeBSD.org
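[For reference, the "threads sleeping on btallo" condition Rick mentions can be spotted from userland by looking at wait channels. A purely illustrative sketch; <pid> is whatever process turns up stuck:]

  # list all threads with their wait channel; stuck allocations show up
  # sleeping on 'btallo'
  ps -axH -o pid,wchan,comm | grep btallo
  # procstat can then show the kernel stacks of a suspect process
  procstat -kk <pid>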
From owner-freebsd-fs@FreeBSD.ORG Sat Mar 29 00:09:59 2014
Date: Sat, 29 Mar 2014 13:09:41 +1300
Message-ID: <87d2h5rhkq.wl%berend@pobox.com>
From: Berend de Boer
To: freebsd-fs@freebsd.org
Subject: nfsd server cache flooded, try to increase nfsrc_floodlevel

Dear all,

I have a subversion repository on a ZFS file system exposed through NFS on FreeBSD 10.0.

This is mounted by Linux clients running Apache.

When people check out the repository, I get:

  nfsd server cache flooded, try to increase nfsrc_floodlevel

When this is mounted using nfs4, on the client you then see:

  E175002: REPORT of '/!svn/vcc/default': Could not read chunk size:
  connection was closed by server

If an nfs3 mount is used, there is no such problem.

This is probably a bug report, but I'm not sure filing it will help anyone, so that's why I throw it out here.

--
All the best,

Berend de Boer
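[The nfs3 fallback mentioned above is a mount-time choice on the Linux client side. A sketch with placeholder host and paths:]

  # on the Linux client: pin the mount to NFSv3 instead of v4
  mount -t nfs -o vers=3 nfsserver:/tank/svn /srv/svn
  # or the equivalent /etc/fstab entry
  nfsserver:/tank/svn  /srv/svn  nfs  vers=3  0  0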
From owner-freebsd-fs@FreeBSD.ORG Sat Mar 29 01:31:10 2014
Date: Fri, 28 Mar 2014 21:31:08 -0400 (EDT)
From: Rick Macklem
To: Berend de Boer
Cc: freebsd-fs@freebsd.org
Subject: Re: nfsd server cache flooded, try to increase nfsrc_floodlevel
Message-ID: <262750574.2512992.1396056668889.JavaMail.root@uoguelph.ca>
In-Reply-To: <87d2h5rhkq.wl%berend@pobox.com>

Berend de Boer wrote:
> Dear all,
>
> I have a subversion repository on a ZFS file system exposed through
> NFS on FreeBSD 10.0.
>
> This is mounted by Linux clients running Apache.
>
> When people check out the repository, I get:
>
>   nfsd server cache flooded, try to increase nfsrc_floodlevel
>
Increasing the value of the sysctl vfs.nfsd.tcphighwater increases nfsrc_floodlevel. You might also want to decrease vfs.nfsd.tcpcachetimeo, since the large default timeout can result in a large cache.

It's hard to say what the correct values are, but Garrett Wollman runs large NFS servers (using ZFS storage) and sets them to:

  vfs.nfsd.tcphighwater=100000
  vfs.nfsd.tcptimeo=300

I think. There have been patches added to head and stable/10 that help keep the cache from growing too big (thanks to Alexander Motin), but you'd need to upgrade to stable/10 to get those.

rick

> When this is mounted using nfs4, on the client you then see:
>
>   E175002: REPORT of '/!svn/vcc/default': Could not read chunk size:
>   connection was closed by server
>
I am not sure if this is caused by the flooded DRC cache or not, but if you still get this after you have increased vfs.nfsd.tcphighwater so that the "try to increase nfsrc_floodlevel" message no longer occurs, email again.

> If an nfs3 mount is used, there is no such problem.
>
> This is probably a bug report, but I'm not sure filing it will help
> anyone, so that's why I throw it out here.
>
> --
> All the best,
>
> Berend de Boer
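[Put together, the tuning Rick describes could be applied and observed like this. The values are the ones quoted above, not universal recommendations, and the timeout sysctl uses the vfs.nfsd.tcpcachetimeo name he gives earlier (his "=300" line appears to refer to the same knob):]

  # apply at runtime
  sysctl vfs.nfsd.tcphighwater=100000
  sysctl vfs.nfsd.tcpcachetimeo=300
  # make it persistent across reboots
  echo 'vfs.nfsd.tcphighwater=100000' >> /etc/sysctl.conf
  echo 'vfs.nfsd.tcpcachetimeo=300' >> /etc/sysctl.conf
  # watch the server's extended NFS / duplicate request cache statistics
  nfsstat -e -s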
From owner-freebsd-fs@FreeBSD.ORG Sat Mar 29 04:36:50 2014
Date: Fri, 28 Mar 2014 23:36:33 -0500
From: Karl Denninger
To: freebsd-fs@freebsd.org
Subject: Re: zfs l2arc warmup
Message-ID: <53364DD1.6040100@denninger.net>
References: <20140328005911.GA30665@neutralgood.org> <33ff828c517307c9681c361a12cff2ee@mail.mikej.com>

On 3/28/2014 4:45 PM, Dmitry Morozovsky wrote:
> On Fri, 28 Mar 2014, mikej wrote:
>
> [snip most again]
>
>> Any file system that honors fsync, provided the DB uses fsync, should
>> be fine.
>>
>> Any data loss then will only be determined by what transaction (log)
>> capabilities the DB has.
>
> And?
>
> 1. The DB issues a "sync WAL" request, which is translated into
> fsync-like FS requests, which (IIUC) should be directed to the ZIL.
>
> 2. The ZIL fails in the middle of the request, or, even worse, after
> reporting that the ZIL transaction is done but before the ZIL is
> replayed to the underlying media.
>
> 3. Inconsistent DB?
>
> I'm in hope I'm wrong somewhere...
If the DB is EVER lied to on an fsync'd write (that is, it gets back a completion when the write was not actually complete and on stable storage), you're asking for a corrupted database.

It doesn't much matter WHY the DB was lied to. All modern database systems rely on the filesystem and operating system NOT lying to them about the fact that written data is on stable storage when an fsync'd write call returns.

-- 
-- Karl
karl@denninger.net

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 30 18:43:51 2014
Date: Sun, 30 Mar 2014 20:43:50 +0200
From: Idwer Vollering
To: freebsd-fs@freebsd.org
Subject: ZFS panic: spin lock held too long

==== dmesg (from a cold boot) ====

$ dmesg
Copyright (c) 1992-2014 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014
    root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610
CPU: AMD A8-5500 APU with Radeon(tm) HD Graphics (3194.26-MHz K8-class CPU)
  Origin = "AuthenticAMD"  Id = 0x610f01  Family = 0x15  Model = 0x10  Stepping = 1
  Features=0x178bfbff
  Features2=0x3e98320b
  AMD Features=0x2e500800
  AMD Features2=0x1ebbfff
  Standard Extended Features=0x8
TSC: P-state invariant, performance statistics
real memory  = 9110028288 (8688 MB)
avail memory = 7711780864 (7354 MB)
Event timer "LAPIC" quality 400
ACPI APIC Table:
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
 cpu0 (BSP): APIC ID: 16
 cpu1 (AP): APIC ID: 17
 cpu2 (AP): APIC ID: 18
 cpu3 (AP): APIC ID: 19
ioapic0 irqs 0-23 on motherboard
random: initialized
kbd1 at kbdmux0
acpi0: on motherboard
acpi0: Power Button (fixed)
cpu0: on acpi0
cpu1: on acpi0
cpu2: on acpi0
cpu3: on acpi0
atrtc0: port 0x70-0x71 irq 8 on acpi0
Event timer "RTC" frequency 32768 Hz quality 0
attimer0: port 0x40-0x43 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <32-bit timer at 3.579545MHz> port 0x818-0x81b on acpi0
hpet0: iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
pcib0: port 0xcf8-0xcff iomem 0xa0000-0xbffff,0xc0000-0xdffff,0xe0000000-0xffffffff on acpi0
pci0: on pcib0
vgapci0: port 0x2000-0x20ff mem 0xe0000000-0xefffffff,0xf0100000-0xf013ffff at device 1.0 on pci0
vgapci0: Boot video device
hdac0: mem 0xf0140000-0xf0143fff at device 1.1 on pci0
hdac0: hdac_get_capabilities: Invalid corb size (0)
device_attach: hdac0 attach returned 6
xhci0: mem 0xf0148000-0xf0149fff at device 16.0 on pci0
xhci0: 32 byte context size.
usbus0 on xhci0
xhci1: mem 0xf014a000-0xf014bfff at device 16.1 on pci0
xhci1: 32 byte context size.
usbus1 on xhci1
ahci0: port 0x2410-0x2417,0x2420-0x2423,0x2418-0x241f,0x2424-0x2427,0x2400-0x240f mem 0xf014f000-0xf014f7ff at device 17.0 on pci0
ahci0: AHCI v1.30 with 8 6Gbps ports, Port Multiplier not supported
ahcich0: at channel 0 on ahci0
ahcich1: at channel 1 on ahci0
ohci0: mem 0xf014c000-0xf014cfff at device 18.0 on pci0
usbus2 on ohci0
ehci0: mem 0xf014f800-0xf014f8ff at device 18.2 on pci0
usbus3: EHCI version 1.0
usbus3 on ehci0
ohci1: mem 0xf014d000-0xf014dfff at device 19.0 on pci0
usbus4 on ohci1
ehci1: mem 0xf014f900-0xf014f9ff at device 19.2 on pci0
usbus5: EHCI version 1.0
usbus5 on ehci1
pci0: at device 20.0 (no driver attached)
hdac0: mem 0xf0144000-0xf0147fff at device 20.2 on pci0
isab0: at device 20.3 on pci0
isa0: on isab0
pcib1: at device 20.4 on pci0
pci1: on pcib1
ohci2: mem 0xf014e000-0xf014efff at device 20.5 on pci0
usbus6 on ohci2
sdhci_pci0: mem 0xf014fa00-0xf014faff at device 20.7 on pci0
sdhci_pci0: 1 slot(s) allocated
pcib2: at device 21.0 on pci0
pci2: on pcib2
pcib3: at device 21.1 on pci0
pcib3: failed to allocate initial I/O port window: 0x1000-0x1fff
pci3: on pcib3
re0: mem 0xf0004000-0xf0004fff,0xf0000000-0xf0003fff at device 0.0 on pci3
re0: Using 1 MSI-X message
re0: turning off MSI enable bit.
re0: Chip rev. 0x48000000
re0: MAC rev. 0x00000000
miibus0: on re0
rgephy0: PHY 1 on miibus0
rgephy0: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re0: Ethernet address:
acpi_button0: on acpi0
orm0: at iomem 0xed800-0xeffff on isa0
sc0: at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
atkbdc0: at port 0x60,0x64 on isa0
atkbd0: irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
ppc0: cannot reserve I/O port range
uart0: <16550 or compatible> at port 0x3f8-0x3ff irq 4 flags 0x10 on isa0
acpi_throttle0: on cpu0
hwpstate0: on cpu0
acpi_throttle1: on cpu1
acpi_throttle1: failed to attach P_CNT
device_attach: acpi_throttle1 attach returned 6
acpi_throttle2: on cpu2
acpi_throttle2: failed to attach P_CNT
device_attach: acpi_throttle2 attach returned 6
acpi_throttle3: on cpu3
acpi_throttle3: failed to attach P_CNT
device_attach: acpi_throttle3 attach returned 6
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
hdacc0: at cad 0 on hdac0
hdaa0: at nid 1 on hdacc0
pcm0: at nid 20,22,21,23 and 24,26 on hdaa0
pcm1: at nid 27 and 25 on hdaa0
pcm2: at nid 30 on hdaa0
pcm3: at nid 17 on hdaa0
random: unblocking device.
usbus0: 5.0Gbps Super Speed USB v3.0
usbus1: 5.0Gbps Super Speed USB v3.0
usbus2: 12Mbps Full Speed USB v1.0
usbus3: 480Mbps High Speed USB v2.0
usbus4: 12Mbps Full Speed USB v1.0
usbus5: 480Mbps High Speed USB v2.0
usbus6: 12Mbps Full Speed USB v1.0
ugen0.1: <0x1022> at usbus0
uhub0: <0x1022 XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus0
ugen6.1: at usbus6
uhub1: on usbus6
ugen5.1: at usbus5
uhub2: on usbus5
ugen4.1: at usbus4
uhub3: on usbus4
ugen3.1: at usbus3
uhub4: on usbus3
ugen2.1: at usbus2
uhub5: on usbus2
ugen1.1: <0x1022> at usbus1
uhub6: <0x1022 XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus1
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: ATA-8 SATA 2.x device
ada0: Serial Number
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: ATA-8 SATA 2.x device
ada1: Serial Number
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
Netvsc initializing...
SMP: AP CPU #3 Launched!
SMP: AP CPU #1 Launched!
SMP: AP CPU #2 Launched!
Timecounter "TSC-low" frequency 1597128784 Hz quality 1000
uhub1: 2 ports with 2 removable, self powered
Root mount waiting for: usbus5 usbus4 usbus3 usbus2 usbus1 usbus0
uhub3: 5 ports with 5 removable, self powered
uhub5: 5 ports with 5 removable, self powered
uhub0: 4 ports with 4 removable, self powered
uhub6: 4 ports with 4 removable, self powered
Root mount waiting for: usbus5 usbus3
uhub4: 5 ports with 5 removable, self powered
uhub2: 5 ports with 5 removable, self powered
Trying to mount root from zfs:zroot_mirror_hd103sj []...
ugen3.2: at usbus3
run0: <1.0> on usbus3
run0: MAC/BBP RT3070 (rev 0x0201), RF RT2020 (MIMO 1T1R), address
wlan1: Ethernet address:
run0: firmware RT2870 ver. 0.236 loaded
wlan1: link state changed to UP

==== dump 1 ====

$ sudo cat info.0
Dump header from device /dev/gpt/swap0
  Architecture: amd64
  Architecture Version: 2
  Dump Length: 2289082368B (2183 MB)
  Blocksize: 512
  Dumptime: Thu Mar 20 13:59:54 2014
  Hostname: machete
  Magic: FreeBSD Kernel Dump
  Version String: FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014
      root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC
  Panic String: spin lock held too long
  Dump Parity: 422405026
  Bounds: 3
  Dump Status: good

$ sudo kgdb /boot/kernel/kernel.symbols vmcore.0
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:
spin lock 0xffffffff814fa030 (smp rendezvous) held by 0xfffff8000fb42920 (tid 100428) too long
timeout stopping cpus
panic: spin lock held too long
cpuid = 3
KDB: stack backtrace:
#0 0xffffffff808e7dd0 at kdb_backtrace+0x60
#1 0xffffffff808af8b5 at panic+0x155
#2 0xffffffff8089cb71 at _mtx_lock_spin_cookie+0x241
#3 0xffffffff80c7ef54 at smp_targeted_tlb_shootdown+0xf4
#4 0xffffffff80c80335 at pmap_invalidate_page+0x265
#5 0xffffffff80c88983 at pmap_ts_referenced+0x6c3
#6 0xffffffff80b2182a at vm_pageout+0x10fa
#7 0xffffffff8088198a at fork_exit+0x9a
#8 0xffffffff80c758ce at fork_trampoline+0xe
Uptime: 1h29m59s
Dumping 2183 out of 7624 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

Reading symbols from /boot/kernel/zfs.ko...done.
Loaded symbols for /boot/kernel/zfs.ko
Reading symbols from /boot/kernel/opensolaris.ko...done.
Loaded symbols for /boot/kernel/opensolaris.ko
Reading symbols from /boot/kernel/if_run.ko...done.
Loaded symbols for /boot/kernel/if_run.ko
Reading symbols from /boot/kernel/pf.ko...done.
Loaded symbols for /boot/kernel/pf.ko
#0  doadump (textdump=) at pcpu.h:219
219     pcpu.h: No such file or directory.
        in pcpu.h
(kgdb) bt
#0  doadump (textdump=) at pcpu.h:219
#1  0xffffffff808af530 in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:447
#2  0xffffffff808af8f4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:754
#3  0xffffffff8089cb71 in _mtx_lock_spin_cookie (c=, tid=, opts=, file=, line=)
    at /usr/src/sys/kern/kern_mutex.c:554
#4  0xffffffff80c7ef54 in smp_targeted_tlb_shootdown (mask={__bits = {4}},
    vector=245, pmap=0xfffff8000f0bc9f8, addr1=34388402176, addr2=0)
    at /usr/src/sys/amd64/amd64/mp_machdep.c:1179
#5  0xffffffff80c80335 in pmap_invalidate_page (pmap=, va=)
    at /usr/src/sys/amd64/amd64/pmap.c:1376
#6  0xffffffff80c88983 in pmap_ts_referenced (m=0xfffff8021168a470)
    at /usr/src/sys/amd64/amd64/pmap.c:5744
#7  0xffffffff80b2182a in vm_pageout () at /usr/src/sys/vm/vm_pageout.c:1360
#8  0xffffffff8088198a in fork_exit (callout=0xffffffff80b20730 , arg=0x0,
    frame=0xfffffe02151b6a40) at /usr/src/sys/kern/kern_fork.c:995
#9  0xffffffff80c758ce in fork_trampoline ()
    at /usr/src/sys/amd64/amd64/exception.S:606
#10 0x0000000000000000 in ?? ()
Current language:  auto; currently minimal
(kgdb) up
#1  0xffffffff808af530 in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:447
447             doadump(TRUE);
(kgdb) list
442              * been completed.
443              */
444             EVENTHANDLER_INVOKE(shutdown_post_sync, howto);
445
446             if ((howto & (RB_HALT|RB_DUMP)) == RB_DUMP && !cold && !dumping)
447                     doadump(TRUE);
448
449             /* Now that we're going to really halt the system... */
450             EVENTHANDLER_INVOKE(shutdown_final, howto);
451
(kgdb) up
#2  0xffffffff808af8f4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:754
754             kern_reboot(bootopt);
(kgdb) list
749             /*thread_lock(td); */
750             td->td_flags |= TDF_INPANIC;
751             /* thread_unlock(td); */
752             if (!sync_on_panic)
753                     bootopt |= RB_NOSYNC;
754             kern_reboot(bootopt);
755     }
756
757     /*
758      * Support for poweroff delay.
(kgdb) up
#3  0xffffffff8089cb71 in _mtx_lock_spin_cookie (c=, tid=, opts=, file=, line=)
    at /usr/src/sys/kern/kern_mutex.c:554
554                     panic("spin lock held too long");
(kgdb) list
549                     printf( "spin lock %p (%s) held by %p (tid %d) too long\n",
550                         m, m->lock_object.lo_name, td, td->td_tid);
551     #ifdef WITNESS
552                     witness_display_spinlock(&m->lock_object, td, printf);
553     #endif
554                     panic("spin lock held too long");
555     }
556
557     #ifdef SMP
558     /*
(kgdb) up
#4  0xffffffff80c7ef54 in smp_targeted_tlb_shootdown (mask={__bits = {4}},
    vector=245, pmap=0xfffff8000f0bc9f8, addr1=34388402176, addr2=0)
    at /usr/src/sys/amd64/amd64/mp_machdep.c:1179
1179            mtx_lock_spin(&smp_ipi_mtx);
(kgdb) list
1174                    if (CPU_EMPTY(&mask))
1175                            return;
1176            }
1177            if (!(read_rflags() & PSL_I))
1178                    panic("%s: interrupts disabled", __func__);
1179            mtx_lock_spin(&smp_ipi_mtx);
1180            smp_tlb_invpcid.addr = addr1;
1181            if (pmap == NULL) {
1182                    smp_tlb_invpcid.pcid = 0;
1183            } else {
(kgdb) up
#5  0xffffffff80c80335 in pmap_invalidate_page (pmap=, va=)
    at /usr/src/sys/amd64/amd64/pmap.c:1376
1376                            smp_masked_invlpg(other_cpus, pmap, va);
(kgdb) list
1371                    if (pmap_pcid_enabled)
1372                            CPU_AND(&other_cpus, &pmap->pm_save);
1373                    else
1374                            CPU_AND(&other_cpus, &pmap->pm_active);
1375                    if (!CPU_EMPTY(&other_cpus))
1376                            smp_masked_invlpg(other_cpus, pmap, va);
1377            }
1378            sched_unpin();
1379    }
1380
(kgdb) up
#6  0xffffffff80c88983 in pmap_ts_referenced (m=0xfffff8021168a470)
    at /usr/src/sys/amd64/amd64/pmap.c:5744
5744                                    pmap_invalidate_page(pmap, pv->pv_va);
(kgdb) list
5739                        m));
5740                    pte = pmap_pde_to_pte(pde, pv->pv_va);
5741                    if ((*pte & PG_A) != 0) {
5742                            if (safe_to_clear_referenced(pmap, *pte)) {
5743                                    atomic_clear_long(pte, PG_A);
5744                                    pmap_invalidate_page(pmap, pv->pv_va);
5745                                    cleared++;
5746                            } else if ((*pte & PG_W) == 0) {
5747                                    /*
5748                                     * Wired pages cannot be paged out so
(kgdb) up
#7  0xffffffff80b2182a in vm_pageout () at /usr/src/sys/vm/vm_pageout.c:1360
1360                            act_delta += pmap_ts_referenced(m);
(kgdb) list
1355             *    2) The ref was transitioning to one and we saw zero.
1356             *    The page lock prevents a new reference to this page so
1357             *    we need not check the reference bits.
1358             */
1359            if (m->object->ref_count != 0)
1360                    act_delta += pmap_ts_referenced(m);
1361
1362            /*
1363             * Advance or decay the act_count based on recent usage.
1364             */
(kgdb) up
#8  0xffffffff8088198a in fork_exit (callout=0xffffffff80b20730 , arg=0x0,
    frame=0xfffffe02151b6a40) at /usr/src/sys/kern/kern_fork.c:995
995             callout(arg, frame);
(kgdb) list
990              * cpu_set_fork_handler intercepts this function call to
991              * have this call a non-return function to stay in kernel mode.
992              * initproc has its own fork handler, but it does return.
993              */
994             KASSERT(callout != NULL, ("NULL callout in fork_exit"));
995             callout(arg, frame);
996
997             /*
998              * Check if a kernel thread misbehaved and returned from its main
999              * function.
(kgdb) up
#9  0xffffffff80c758ce in fork_trampoline ()
    at /usr/src/sys/amd64/amd64/exception.S:606
606             call    fork_exit
(kgdb) up
#10 0x0000000000000000 in ?? ()
(kgdb) list
601
602     ENTRY(fork_trampoline)
603             movq    %r12,%rdi               /* function */
604             movq    %rbx,%rsi               /* arg1 */
605             movq    %rsp,%rdx               /* trapframe pointer */
606             call    fork_exit
607             MEXITCOUNT
608             jmp     doreti                  /* Handle any ASTs */
609
610     /*
(kgdb) up
Initial frame selected; you cannot go up.

==== dump 2 ====

$ sudo cat info.1
Dump header from device /dev/gpt/swap0
  Architecture: amd64
  Architecture Version: 2
  Dump Length: 479866880B (457 MB)
  Blocksize: 512
  Dumptime: Thu Mar 27 18:30:42 2014
  Hostname: machete
  Magic: FreeBSD Kernel Dump
  Version String: FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014
      root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC
  Panic String: spin lock held too long
  Dump Parity: 287807286
  Bounds: 0
  Dump Status: good

$ sudo kgdb /boot/kernel/kernel.symbols vmcore.1
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:
spin lock 0xffffffff814fa030 (smp rendezvous) held by 0xfffff8000e525920 (tid 100396) too long
timeout stopping cpus
panic: spin lock held too long
cpuid = 2
KDB: stack backtrace:
#0 0xffffffff808e7dd0 at kdb_backtrace+0x60
#1 0xffffffff808af8b5 at panic+0x155
#2 0xffffffff8089cb71 at _mtx_lock_spin_cookie+0x241
#3 0xffffffff80c7ef54 at smp_targeted_tlb_shootdown+0xf4
#4 0xffffffff80c80335 at pmap_invalidate_page+0x265
#5 0xffffffff80c88983 at pmap_ts_referenced+0x6c3
#6 0xffffffff80b2182a at vm_pageout+0x10fa
#7 0xffffffff8088198a at fork_exit+0x9a
#8 0xffffffff80c758ce at fork_trampoline+0xe
Uptime: 20m22s
Dumping 457 out of 7624 MB:..4%..11%..21%..32%..42%..53%..63%..74%..81%..91%

Reading symbols from /boot/kernel/zfs.ko...done.
Loaded symbols for /boot/kernel/zfs.ko
Reading symbols from /boot/kernel/opensolaris.ko...done.
Loaded symbols for /boot/kernel/opensolaris.ko
Reading symbols from /boot/kernel/if_run.ko...done.
Loaded symbols for /boot/kernel/if_run.ko
Reading symbols from /boot/kernel/pf.ko...done.
Loaded symbols for /boot/kernel/pf.ko
#0  doadump (textdump=) at pcpu.h:219
219     pcpu.h: No such file or directory.
        in pcpu.h
(kgdb) bt
#0  doadump (textdump=) at pcpu.h:219
#1  0xffffffff808af530 in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:447
#2  0xffffffff808af8f4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:754
#3  0xffffffff8089cb71 in _mtx_lock_spin_cookie (c=, tid=, opts=, file=, line=)
    at /usr/src/sys/kern/kern_mutex.c:554
#4  0xffffffff80c7ef54 in smp_targeted_tlb_shootdown (mask={__bits = {8}},
    vector=245, pmap=0xfffff8000e2542f8, addr1=34385813504, addr2=0)
    at /usr/src/sys/amd64/amd64/mp_machdep.c:1179
#5  0xffffffff80c80335 in pmap_invalidate_page (pmap=, va=)
    at /usr/src/sys/amd64/amd64/pmap.c:1376
#6  0xffffffff80c88983 in pmap_ts_referenced (m=0xfffff80216069c60)
    at /usr/src/sys/amd64/amd64/pmap.c:5744
#7  0xffffffff80b2182a in vm_pageout () at /usr/src/sys/vm/vm_pageout.c:1360
#8  0xffffffff8088198a in fork_exit (callout=0xffffffff80b20730 , arg=0x0,
    frame=0xfffffe02151b6a40) at /usr/src/sys/kern/kern_fork.c:995
#9  0xffffffff80c758ce in fork_trampoline ()
    at /usr/src/sys/amd64/amd64/exception.S:606
#10 0x0000000000000000 in ?? ()
Current language:  auto; currently minimal
(kgdb) up
#1  0xffffffff808af530 in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:447
447             doadump(TRUE);
(kgdb) list
442              * been completed.
443              */
444             EVENTHANDLER_INVOKE(shutdown_post_sync, howto);
445
446             if ((howto & (RB_HALT|RB_DUMP)) == RB_DUMP && !cold && !dumping)
447                     doadump(TRUE);
448
449             /* Now that we're going to really halt the system... */
450             EVENTHANDLER_INVOKE(shutdown_final, howto);
451
(kgdb) up
#2  0xffffffff808af8f4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:754
754             kern_reboot(bootopt);
(kgdb) list
749             /*thread_lock(td); */
750             td->td_flags |= TDF_INPANIC;
751             /* thread_unlock(td); */
752             if (!sync_on_panic)
753                     bootopt |= RB_NOSYNC;
754             kern_reboot(bootopt);
755     }
756
757     /*
758      * Support for poweroff delay.
(kgdb) up
#3  0xffffffff8089cb71 in _mtx_lock_spin_cookie (c=, tid=, opts=, file=, line=)
    at /usr/src/sys/kern/kern_mutex.c:554
554                     panic("spin lock held too long");
(kgdb) list
549                     printf( "spin lock %p (%s) held by %p (tid %d) too long\n",
550                         m, m->lock_object.lo_name, td, td->td_tid);
551     #ifdef WITNESS
552                     witness_display_spinlock(&m->lock_object, td, printf);
553     #endif
554                     panic("spin lock held too long");
555     }
556
557     #ifdef SMP
558     /*
(kgdb) up
#4  0xffffffff80c7ef54 in smp_targeted_tlb_shootdown (mask={__bits = {8}},
    vector=245, pmap=0xfffff8000e2542f8, addr1=34385813504, addr2=0)
    at /usr/src/sys/amd64/amd64/mp_machdep.c:1179
1179            mtx_lock_spin(&smp_ipi_mtx);
(kgdb) list
1174                    if (CPU_EMPTY(&mask))
1175                            return;
1176            }
1177            if (!(read_rflags() & PSL_I))
1178                    panic("%s: interrupts disabled", __func__);
1179            mtx_lock_spin(&smp_ipi_mtx);
1180            smp_tlb_invpcid.addr = addr1;
1181            if (pmap == NULL) {
1182                    smp_tlb_invpcid.pcid = 0;
1183            } else {
(kgdb) up
#5  0xffffffff80c80335 in pmap_invalidate_page (pmap=, va=)
    at /usr/src/sys/amd64/amd64/pmap.c:1376
1376                            smp_masked_invlpg(other_cpus, pmap, va);
(kgdb) list
1371                    if (pmap_pcid_enabled)
1372                            CPU_AND(&other_cpus, &pmap->pm_save);
1373                    else
1374                            CPU_AND(&other_cpus, &pmap->pm_active);
1375                    if (!CPU_EMPTY(&other_cpus))
1376                            smp_masked_invlpg(other_cpus, pmap, va);
1377            }
1378            sched_unpin();
1379    }
1380
(kgdb) up
#6  0xffffffff80c88983 in pmap_ts_referenced (m=0xfffff80216069c60)
    at /usr/src/sys/amd64/amd64/pmap.c:5744
5744                                    pmap_invalidate_page(pmap, pv->pv_va);
(kgdb) list
5739                        m));
5740                    pte = pmap_pde_to_pte(pde, pv->pv_va);
5741                    if ((*pte & PG_A) != 0) {
5742                            if (safe_to_clear_referenced(pmap, *pte)) {
5743                                    atomic_clear_long(pte, PG_A);
5744                                    pmap_invalidate_page(pmap, pv->pv_va);
5745                                    cleared++;
5746                            } else if ((*pte & PG_W) == 0) {
5747                                    /*
5748                                     * Wired pages cannot be paged out so
(kgdb) up
#7  0xffffffff80b2182a in vm_pageout () at /usr/src/sys/vm/vm_pageout.c:1360
1360                            act_delta += pmap_ts_referenced(m);
(kgdb) list
1355             *    2) The ref was transitioning to one and we saw zero.
1356             *    The page lock prevents a new reference to this page so
1357             *    we need not check the reference bits.
1358             */
1359            if (m->object->ref_count != 0)
1360                    act_delta += pmap_ts_referenced(m);
1361
1362            /*
1363             * Advance or decay the act_count based on recent usage.
1364             */
(kgdb) up
#8  0xffffffff8088198a in fork_exit (callout=0xffffffff80b20730 , arg=0x0,
    frame=0xfffffe02151b6a40) at /usr/src/sys/kern/kern_fork.c:995
995             callout(arg, frame);
(kgdb) list
990              * cpu_set_fork_handler intercepts this function call to
991              * have this call a non-return function to stay in kernel mode.
992              * initproc has its own fork handler, but it does return.
993              */
994             KASSERT(callout != NULL, ("NULL callout in fork_exit"));
995             callout(arg, frame);
996
997             /*
998              * Check if a kernel thread misbehaved and returned from its main
999              * function.
(kgdb) up
#9  0xffffffff80c758ce in fork_trampoline ()
    at /usr/src/sys/amd64/amd64/exception.S:606
606             call    fork_exit
Current language:  auto; currently asm
(kgdb) list
601
602     ENTRY(fork_trampoline)
603             movq    %r12,%rdi               /* function */
604             movq    %rbx,%rsi               /* arg1 */
605             movq    %rsp,%rdx               /* trapframe pointer */
606             call    fork_exit
607             MEXITCOUNT
608             jmp     doreti                  /* Handle any ASTs */
609
610     /*
(kgdb) up
#10 0x0000000000000000 in ?? ()
(kgdb) list
611      * To efficiently implement classification of trap and interrupt handlers
612      * for profiling, there must be only trap handlers between the labels btrap
613      * and bintr, and only interrupt handlers between the labels bintr and
614      * eintr.  This is implemented (partly) by including files that contain
615      * some of the handlers.  Before including the files, set up a normal asm
616      * environment so that the included files doen't need to know that they are
617      * included.
618      */
619
620     #ifdef COMPAT_FREEBSD32
(kgdb) up
Initial frame selected; you cannot go up.

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 02:32:56 2014
Date: Mon, 31 Mar 2014 11:32:53 +0900
From: Yonghyeon PYUN
Reply-To: pyunyh@gmail.com
To: Rick Macklem
Cc: FreeBSD Filesystems, FreeBSD Net, Alexander Motin
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem
Message-ID: <20140331023253.GC3548@michelle.cdnetworks.com>
In-Reply-To: <1903781266.1237680.1395880068597.JavaMail.root@uoguelph.ca>

On Wed, Mar 26, 2014 at 08:27:48PM -0400, Rick Macklem wrote:
> pyunyh@gmail.com wrote:
> > On Tue, Mar 25, 2014 at 07:10:35PM -0400, Rick Macklem wrote:
> > > Hi,
> > >
> > > First off, I hope you don't mind that I cross-posted this,
> > > but I wanted to make sure both the NFS/iSCSI and networking
> > > types see it.
> > > If you look in this mailing list thread:
> > > http://docs.FreeBSD.org/cgi/mid.cgi?1850411724.1687820.1395621539316.JavaMail.root
> > > you'll see that several people have been working hard at testing and,
> > > thanks to them, I think I now know what is going on.
> > >
> > Thanks for your hard work on narrowing down that issue. I'm too
> > busy with $work these days, so I couldn't find time to investigate
> > the issue.
> >
> > > (This applies to network drivers that support TSO and are limited
> > > to 32 transmit segments->32 mbufs in chain.) Doing a quick search
> > > I found the following drivers that appear to be affected (I may
> > > have missed some):
> > > jme, fxp, age, sge, msk, alc, ale, ixgbe/ix, nfe, e1000/em, re
> > >
> > The magic number 32 was chosen a long time ago when I implemented TSO
> > in non-Intel drivers. I tried to find an optimal number to reduce
> > kernel stack usage at that time. bus_dma(9) will coalesce with the
> > previous segment if possible, so I thought the number 32 was not an
> > issue. Not sure whether current bus_dma(9) still has the same code,
> > though. The number 32 is an arbitrary one, so you can increase it
> > if you want.
> >
> Well, in the case of "ix" Jack Vogel says it is a hardware limitation.
> I can't change drivers that I can't test and don't know anything about
> the hardware. Maybe replacing m_collapse() with m_defrag() is an
> exception, since I know what that is doing and it isn't hardware
> related, but I would still prefer a review by the driver
> author/maintainer before making such a change.
>
> If there are drivers that you know can be increased from 32->35,
> please do so, since that will not only avoid the EFBIG failures but
> also avoid a lot of calls to m_defrag().
>
> > > Further, of these drivers, the following use m_collapse() and not
> > > m_defrag() to try and reduce the # of mbufs in the chain.
> > > m_collapse() is not going to get the 35 mbufs down to 32 mbufs,
> > > as far as I can see, so these ones are more badly broken:
> > > jme, fxp, age, sge, alc, ale, nfe, re
> > >
> > I guess m_defrag(9) is more optimized for non-TSO packets. You don't
> > want to waste CPU cycles to copy the full frame to reduce the
> > number of mbufs in the chain. For TSO packets, m_defrag(9) looks
> > better, but if we always have to copy a full TSO packet to make TSO
> > work, driver writers will have to invent a better scheme rather than
> > blindly relying on m_defrag(9), I guess.
> >
> Yes, avoiding m_defrag() calls would be nice. For this issue,
> increasing the transmit segment limit from 32->35 does that, if the
> change can be done easily/safely.
>
> Otherwise, all I can think of is my suggestion to add something like
> if_hw_tsomaxseg which the driver can use to tell tcp_output() the
> driver's limit for # of mbufs in the chain.
>
> > > The long description is in the above thread, but the short
> > > version is:
> > > - NFS generates a chain with 35 mbufs in it for (read/readdir
> > >   replies and write requests), made up of (tcpip header, RPC
> > >   header, NFS args, 32 clusters of file data)
> > > - tcp_output() usually trims the data size down to tp->t_tsomax
> > >   (65535) and then some more to make it an exact multiple of TCP
> > >   transmit data size.
> > > - the net driver prepends an ethernet header, growing the length
> > >   by 14 (or sometimes 18 for vlans), but in the first mbuf and not
> > >   adding one to the chain.
> > > - m_defrag() copies this to a chain of 32 mbuf clusters (because
> > >   the total data length is <= 64K) and it gets sent
> > >
> > > However, if the data length is a little less than 64K when passed
> > > to tcp_output(), so that the length including headers is in the
> > > range 65519->65535...
> > > - tcp_output() doesn't reduce its size.
> > > - the net driver adds an ethernet header, making the total data
> > >   length slightly greater than 64K
> > > - m_defrag() copies it to a chain of 33 mbuf clusters, which fails
> > >   with EFBIG
> > > --> trainwrecks NFS performance, because the TSO segment is
> > >     dropped instead of sent.
> > >
> > > A tester also stated that the problem could be reproduced using
> > > iSCSI. Maybe Edward Napierala might know some details w.r.t. what
> > > kind of mbuf chain iSCSI generates?
> > >
> > > Also, one tester has reported that setting if_hw_tsomax in the
> > > driver before the ether_ifattach() call didn't make the value of
> > > tp->t_tsomax smaller. However, reducing IP_MAXPACKET (which is
> > > what it is set to by default) did reduce it. I have no idea why
> > > this happens or how to fix it, but it implies that setting
> > > if_hw_tsomax in the driver isn't a solution until this is resolved.
> > >
> > > So, what to do about this?
> > > First, I'd like a simple fix/workaround that can go into 9.3
> > > (which is code freeze in May). The best thing I can think of is
> > > setting if_hw_tsomax to a smaller default value. (Line# 658 of
> > > sys/net/if.c in head.)
> > >
> > > Version A:
> > > replace
> > >     ifp->if_hw_tsomax = IP_MAXPACKET;
> > > with
> > >     ifp->if_hw_tsomax = min(32 * MCLBYTES - (ETHER_HDR_LEN +
> > >         ETHER_VLAN_ENCAP_LEN), IP_MAXPACKET);
> > > plus
> > > replace m_collapse() with m_defrag() in the drivers listed above.
> > >
> > > This would only reduce the default from 65535->65518, so it only
> > > impacts the uncommon case where the output size (with tcpip
> > > header) is within this range. (As such, I don't think it would
> > > have a negative impact for drivers that handle more than 32
> > > transmit segments.)
> > > From the testers, it seems that this is sufficient to get rid of
> > > the EFBIG errors. (The total data length including ethernet header
> > > doesn't exceed 64K, so m_defrag() fits it into 32 mbuf clusters.)
> > >
> > > The main downside of this is that there will be a lot of
> > > m_defrag() calls being done and they do quite a bit of bcopy()'ng.
> > >
> > > Version B:
> > > replace
> > >     ifp->if_hw_tsomax = IP_MAXPACKET;
> > > with
> > >     ifp->if_hw_tsomax = min(29 * MCLBYTES, IP_MAXPACKET);
> > >
> > > This one would avoid the m_defrag() calls, but might have a
> > > negative impact on TSO performance for drivers that can handle 35
> > > transmit segments, since the maximum TSO segment size is reduced
> > > by about 6K. (Because of the second size reduction to an exact
> > > multiple of TCP transmit data size, the exact amount varies.)
> > >
> > > Possible longer term fixes:
> > > One longer term fix might be to add something like if_hw_tsomaxseg
> > > so that a driver can set a limit on the number of transmit
> > > segments (mbufs in chain) and tcp_output() could use that to limit
> > > the size of the TSO segment, as required. (I have a first stab at
> > > such a patch, but no way to test it, so I can't see that being
> > > done by May. Also, it would require changes to a lot of drivers to
> > > make it work. I've attached this patch, in case anyone wants to
> > > work on it?)
> > >
> > > Another might be to increase the size of MCLBYTES (I don't see
> > > this as practical for 9.3, although the actual change is simple.)
> > > I do think that increasing MCLBYTES might be something to consider
> > > doing in the future, for reasons beyond fixing this.
> > >
> > > So, what do others think should be done? rick
> > >
> > AFAIK all TSO capable drivers you mentioned above have no limit on
> > the number of TX segments in the TSO path. Not sure about Intel
> > controllers though. Increasing the number of segments will consume
> > lots of kernel stack in those drivers. Given that ixgbe, which seems
> > to use 100, didn't show any kernel stack shortage, I think bumping
> > the number of segments would be a quick way to address the issue.
> >
> Well, bumping it from 32->35 is all it would take for NFS (can't
> comment w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32 for
> the 82599 (just so others aren't confused by the above comment). I
> understand your point was w.r.t. using 100 without blowing the kernel
> stack, but since the testers have been using "ix" with the 82599 chip
> which is limited to 32 transmit segments...
>
> However, please increase any you know can be safely done from 32->35,
> rick

Done in r263957.
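As a sanity check of the arithmetic behind Version A, here is a small
standalone sketch (it assumes the stock values MCLBYTES = 2048,
ETHER_HDR_LEN = 14 and ETHER_VLAN_ENCAP_LEN = 4; it is an illustration,
not code from the thread or the tree):

    #include <stdio.h>

    #define MCLBYTES             2048   /* standard mbuf cluster size */
    #define ETHER_HDR_LEN        14
    #define ETHER_VLAN_ENCAP_LEN 4
    #define IP_MAXPACKET         65535

    int
    main(void)
    {
            /* Old default: a TSO segment near IP_MAXPACKET plus the
             * prepended ethernet header exceeds 32 * 2048 = 65536
             * bytes, so m_defrag() needs a 33rd cluster -> EFBIG. */
            int wire = IP_MAXPACKET + ETHER_HDR_LEN;          /* 65549 */
            printf("old: %d bytes -> %d clusters\n",
                wire, (wire + MCLBYTES - 1) / MCLBYTES);      /* 33 */

            /* Version A: cap the TSO segment at 65518 so that even a
             * VLAN-tagged frame fits in exactly 32 clusters. */
            int capped = 32 * MCLBYTES -
                (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);       /* 65518 */
            wire = capped + ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN;
            printf("Version A: %d bytes -> %d clusters\n",
                wire, (wire + MCLBYTES - 1) / MCLBYTES);      /* 32 */
            return (0);
    }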
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 07:56:30 2014
Message-ID: <53391F6C.9070208@FreeBSD.org>
Date: Mon, 31 Mar 2014 10:55:24 +0300
From: Andriy Gapon
To: Idwer Vollering, freebsd-fs@FreeBSD.org
Subject: Re: ZFS panic: spin lock held too long

on 30/03/2014 21:43 Idwer Vollering said the following:
> Unread portion of the kernel message buffer:
> spin lock 0xffffffff814fa030 (smp rendezvous) held by
> 0xfffff8000fb42920 (tid 100428) too long

Please note the tid and obtain a stack trace for that thread. You can
switch to the thread using the 'tid' command in kgdb.

--
Andriy Gapon
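A minimal kgdb sketch of that suggestion, using the tid reported in the
first panic above (the frames shown will of course differ per dump):

    $ sudo kgdb /boot/kernel/kernel.symbols vmcore.0
    (kgdb) tid 100428   # switch to the thread that held the spin lock
    (kgdb) bt           # backtrace of that thread, not the panicking one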
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 11:06:43 2014
Date: Mon, 31 Mar 2014 11:06:43 GMT
Message-Id: <201403311106.s2VB6hu6058669@freefall.freebsd.org>
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including experimental
development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/187905  fs    [zpool] Confusion zpool with a block size in HDD - blo
o kern/187778  fs    [zfs] Two ZFS filesystems mounted on / at same time
o kern/187594  fs    [zfs] [patch] ZFS ARC behavior problem and fix
o kern/187261  fs    [fuse] FUSE kernel panic when using socket / bind
o bin/187071   fs    [nfs] nfs server only start 2 daemons 1 master & 1 ser
o kern/186645  fs    [fusefs] Crash after unmounting wdfs
o kern/186574  fs    [zfs] zpool history hangs (infinite loop)
o kern/186515  fs    [gptboot] Doesn't boot with GPT when # of entries over
o kern/185963  fs    [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185858  fs    [zfs] zvol clone can't see new device
o kern/184478  fs    [smbfs] mount_smbfs cannot read/write files
o kern/182536  fs    [zfs] zfs deadlock
o kern/181966  fs    [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA
o kern/181834  fs    [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181565  fs    [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o 
kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167362 fs [fusefs] Reproduceble Page Fault when running rsync ov o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang 
upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs 
[zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088   fs    [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779    fs    Background-fsck checks one filesystem twice and omits
o kern/73484   fs    [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs    [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs    [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600    fs    fsck(8) throws files away when it can't grow lost+foun
o kern/68978   fs    [panic] [ufs] crashes with failing hard disk, loose po
o kern/67326   fs    [msdosfs] crash after attempt to mount write protected
o kern/65920   fs    [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs    [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs    [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs    [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs    [hang] Unbounded inode allocation causes kernel to loc
o kern/36566   fs    [smbfs] System reboot with dead smb mount and umount
o bin/27687    fs    fsck(8) wrapper is not properly passing options to fsc
o kern/18874   fs    [2TB] 32bit NFS servers export wrong negative values t
o kern/9619    fs    [nfs] Restarting mountd kills existing mounts

344 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 03:53:25 2014
Date: Mon, 31 Mar 2014 11:53:21 +0800
From: Marcelo Araujo
Reply-To: araujo@FreeBSD.org
To: Rick Macklem
Cc: FreeBSD Filesystems, Alexander Motin
Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem
In-Reply-To: <1377879526.2465097.1396046676367.JavaMail.root@uoguelph.ca>
--089e013d175a1cb6b104f5defe9c
Content-Type: text/plain; charset=ISO-8859-1

Hello Rick,

We have run a couple more benchmarks here with additional options, such
as 64 threads and readahead=8, and this time the table also includes
nfsstat and netstat -m output. The full benchmark is attached, and I can
say this patch really improved the read speed.

I understand your concern about adding yet another sysctl. However,
maybe we can do something like ZFS does: if the system is amd64 and has
more than some threshold of RAM, enable the option by default, or at
least display a warning that points people at the new sysctl. Of course,
other people's opinions are very welcome.

Best Regards,

2014-03-29 6:44 GMT+08:00 Rick Macklem :

> Marcelo Araujo wrote:
> > 2014-03-28 5:37 GMT+08:00 Rick Macklem :
> >
> > > Christopher Forgeron wrote:
> > > > I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE or
> > > > earlier, as a 9.2-STABLE from last year I have doesn't exhibit
> > > > the problem. New code in if.c at line 660 looks to be what is
> > > > starting this, which makes me wonder how TSO was being handled
> > > > before 9.2.
> > > >
> > > > I also like Rick's NFS patch for cluster size. I notice an
> > > > improvement, but don't have solid numbers yet. I'm still stress
> > > > testing it as we speak.
> > > >
> > > Unfortunately, this causes problems for small i386 systems, so I
> > > am reluctant to commit it to head. Maybe a variant that is only
> > > enabled for amd64 systems with lots of memory would be ok?
> > >
> > Rick,
> >
> > Maybe creating a SYSCTL so the end user can enable/disable it would
> > be more reasonable. Also, of course, it is only safe if 64-bit CPUs
> > can enable this SYSCTL. Any other option seems not OK; it will be
> > hard to judge what is "lots of memory" and what is not, since that
> > depends on what is running on the system.
> >
> I guess adding it so it can be optionally enabled via a sysctl isn't
> a bad idea. I think the largest risk here is "how do you tell people
> what the risk of enabling this is"?
>
> There are already a bunch of sysctls related to NFS that few people
> know how to use. (I recall that Alexander has argued that folk don't
> want to worry about these tunables, and I tend to agree.)
>
> If I do a variant of the patch that uses m_getjcl(..M_WAITOK..), then
> at least the "breakage" is thread(s) sleeping on "btallo", which is
> fairly easy to check for, although rather obscure.
> (Btw, I've never reproduced this for a patch that changes the code to
> always use MJUMPAGESIZE mbuf clusters. I can only reproduce it
> intermittently when the patch mixes allocation of MCLBYTES clusters
> and MJUMPAGESIZE clusters.)
>
> I've been poking at it to try and figure out how to get
> m_getjcl(..M_NOWAIT..) to return NULL instead of looping when it runs
> out of boundary tags (to see if that can result in a stable
> implementation of the patch), but haven't had much luck yet.
>
> Bottom line:
> I just don't like committing a patch that can break the system in
> such an obscure way, even if it is enabled via a sysctl.
>
> Others have an opinion on this?
>
> Thanks, rick
>
> > The SYSCTL will be great, and in case you don't have time to do it,
> > I can give you a hand.
> >
> > I'm gonna do more benchmarks today and will send another report,
> > but in our product here, I'm inclined to use this patch, because a
> > 10~20% read speedup for me is a lot.
> > :-)
> >
> > Thank you so much and best regards,
> > --
> > Marcelo Araujo
> > araujo@FreeBSD.org
> > _______________________________________________
> > freebsd-net@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-net
> > To unsubscribe, send any mail to
> > "freebsd-net-unsubscribe@freebsd.org"
>
--
Marcelo Araujo
araujo@FreeBSD.org

--089e013d175a1cb6b104f5defe9c
Content-Type: application/pdf; name="Benchmarkoriginal_64t_rahead8.pdf"
Content-Disposition: attachment; filename="Benchmarkoriginal_64t_rahead8.pdf"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_htf7zpmk0

[base64-encoded PDF attachment omitted]
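For readers following the thread: a minimal sketch of what the opt-in
knob discussed above could look like, in FreeBSD kernel C. This is an
illustration only, not Rick's actual patch; the OID name
vfs.nfs.jumbo_mbuf_clusters, the 8 GB threshold, and the SYSINIT hook
implementing Marcelo's "do it like ZFS" default are all hypothetical.

/*
 * Sketch only: an opt-in knob for the MJUMPAGESIZE cluster patch.
 * The OID name and the amd64/RAM default heuristic are hypothetical.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/systm.h>
#include <sys/sysctl.h>

static int nfs_jumbo_mbuf_clusters = 0;	/* off by default */

SYSCTL_DECL(_vfs_nfs);
SYSCTL_INT(_vfs_nfs, OID_AUTO, jumbo_mbuf_clusters, CTLFLAG_RW,
    &nfs_jumbo_mbuf_clusters, 0,
    "Use MJUMPAGESIZE mbuf clusters for large NFS I/O");

static void
nfs_jumbo_init(void *arg __unused)
{
#ifdef __amd64__
	/* ZFS-style heuristic: default on for amd64 with >= 8GB RAM. */
	if (ptoa((uintmax_t)physmem) >= 8ULL * 1024 * 1024 * 1024)
		nfs_jumbo_mbuf_clusters = 1;
#endif
}
SYSINIT(nfs_jumbo, SI_SUB_VFS, SI_ORDER_ANY, nfs_jumbo_init, NULL);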
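The allocation question Rick raises can also be made concrete. A
hypothetical helper (the name nfsm_getcl_4k is invented here) that
prefers a 4K page-sized cluster but refuses to sleep for one might look
like the sketch below. Two caveats from the thread apply: this assumes
m_getjcl(..M_NOWAIT..) fails cleanly instead of looping when boundary
tags run out, which is the behavior Rick says he is still trying to
achieve, and the fallback mixes MCLBYTES and MJUMPAGESIZE allocations,
exactly the pattern under which he could intermittently reproduce the
hang.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

/*
 * Hypothetical helper: try for a 4K (MJUMPAGESIZE) cluster without
 * sleeping, since M_WAITOK could leave the thread asleep on "btallo"
 * when the allocator runs out of boundary tags.  On failure, fall
 * back to a standard 2K (MCLBYTES) cluster so the caller can still
 * make progress; note this mixes cluster sizes (see caveat above).
 */
static struct mbuf *
nfsm_getcl_4k(void)
{
	struct mbuf *m;

	m = m_getjcl(M_NOWAIT, MT_DATA, 0, MJUMPAGESIZE);
	if (m == NULL)
		m = m_getcl(M_NOWAIT, MT_DATA, 0);
	return (m);
}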
2+LoEt2PI29JM7bEUStM6bfHP4/2xzL+8XI9z0a9YVbhcFq53rt/h7UTnt1dDSaVNw6rlvXKsnUO n2/vp2Wh6VW/n+V5DBhd8gfA2JJ8bAmj0l4Yo7mjjzdpuN7jWjQGji7RA3Bsydm2xFFqnCSP2Px5 tD+Wmz9etmd3o3G56v2U24yxag0ghkPoba+4PR6Prm3f8Xg4HBRdAsUgCDbULWAAUC1545ZACeMH KpbD+XiHuVSXkjFFsaAhWumUq6aYbOAKWohhh25WXjtsyW+3tEObd2tPetF5smU41tYQXcwDtNGy d76tNkiAQFtykphv4LSs94vBMLv6bWqnyjejT737wRX6E6KUB0h/8zJeKSZl6zz7Z5rlRadRytxY g7SxDk+04oBgtL0iGSfY9HGlMkkYDZozO84K5kC5JAwAKlrRQlDWrrLjXv82i1DEqYoWTvYB+ohX tCC8vV77ypYMfhkUZdpxNshzcO1TWzxo9B0NxueOvveNvhihrusVAnQarYLAE2EP3nzvUNfL1u5l qUXW4WgtlxI6DWcMsaJ9d8WwJLGzkJfvomQCP4kuLzpJCI9grk5hAv5xNVqhhNsTiG2hrXXt03XX Pmm6+8lg8rrZ96rfz+NPAc5XCNBptKqFPYXbqtOISZSXd1gS5W7JxawRA0eX6EF7WoKt7GlJoDff fFrZ05JMLO25wX2IkJK1janF066dLa7gtp59E4D9nyK2NsBYLcy/p2CFRcAINb/dLuWSGOX3DaxK Ug+qGc2+18BNqnXUurad1Yc5upSuHykEXIzNaNugXWfjTC4tm2XCgs9txkfOmZ01yTHwIh1EzplN NMnZA7A8iJwz5m2Qmy1oIeScIWOTnC3ah+nOGS01yWlid3ADyEnnatYkJxVmMoiccyJvkrM7+EG6 k84qaJMc05ioIHIbvWWZHGQGUgSRC/EKmtSnvnzkQryCmvp/CXzkQryCwnxjwnQX4hUU0mEeprsQ r6C2+BOmuxCvoEzioBlAhTgFpRBKB6lOhThFomD+D1KdCnEKAy4WRCzEJRQ4WJDNqRCPkAxvftH/ ATzuuIsNCmVuZHN0cmVhbQ0KZW5kb2JqDQozOSAwIG9iag0KPDwvVHlwZS9QYWdlL1BhcmVudCAy IDAgUi9SZXNvdXJjZXM8PC9Gb250PDwvRjEgNSAwIFIvRjIgNyAwIFIvRjQgMTUgMCBSPj4vUHJv Y1NldFsvUERGL1RleHQvSW1hZ2VCL0ltYWdlQy9JbWFnZUldID4+L01lZGlhQm94WyAwIDAgNjEy IDc5Ml0gL0NvbnRlbnRzIDQwIDAgUi9Hcm91cDw8L1R5cGUvR3JvdXAvUy9UcmFuc3BhcmVuY3kv Q1MvRGV2aWNlUkdCPj4vVGFicy9TL1N0cnVjdFBhcmVudHMgMTM+Pg0KZW5kb2JqDQo0MCAwIG9i ag0KPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCAyNjQxPj4NCnN0cmVhbQ0KeJzNXG1v2zgS /h4g/4FfCiSLhOE7qWBVoEm7e71NsN00hxzQFgfXUVJvYztryW0X6I+/oeRXSqLZWPRdWjgMKc2M 5pkhZ4a00Mkb9PPPJ5fnr18i8vw5Ont5jii63987JuiYcsyRTCQWEhnB0CTb37v7aX8Pvbo8R2jl Tnpy0Rvdo4NsdPyvt4czMmfX+3snv1BEGbq+29+jiMA/irhCWkjMoXe4v0ewsNzsr1/3994d3Bwy cTAZHB6Lg+LwmB9khx/Q9T/3914BsYogQ5RiItZoKoU1WyNbEq1IohUSpeR/7e9ZIZjACUOSJ5gK lJQPd/MTGsFFK0/Gmp5sE4FKUIGMK6cgWNobNSamlBMdE0wYXNR/d/B1MigyJD6viPvH/l4YL1bn pQ1OjFhltq6KP8r/NSj57BlLDXJpqWrOLXOCE4XUwgjmY8Z2k8XHYljNx2ZCr41KLW3fCmVCGGoY X1C34+sMKv40wcZUI1TYHpfE8gJLYv2aFXMQicJUzoVlVGPNW2xCtNvEBiqbDIMAVisGXLOO/9wN HrJT9P7gfjKePg5uU3KEmszlqWJQKbE2Xjn+HH/MU/r+8BRlk0mKyCmyclBCgOcpup5m6LI3QUwi qk65PuUUMULFk4VsMGxBCWa8JmTNumumLWPhphLTqi/UKUCSYOrnV9rJKRqMUwoGTy7PjtDHr6kx mvLfzk7yIxh5zFMGLM0RmkxHRUqNUEYO86zfJUocXBdQckXdjJKKhpKByaQNpI5hSrD2Muw/9Apw 4yko3brScDBK2REa9r6lTElmxBHqfblPBcNcH6G8uM2+pAzmSqU7hYhxbKgr5maEdDSEdOJDqFOI lNjAsY4Qb0RIifgIrUu5GSETDyFiF4hdOpGHY+lFj9mkn40KWBjzJV5dgkA1xF6mJshmFJJoKCjq RSGGo3hYfkcIInJSfErfWeb8wxFC0umANXu9gzkdnboNpCyG12TejBgl0SCTMFOo3ULmYfkdcRcQ 4XZIt0PtADJH5gDIGnPPTiATfOeQeVh+h4jXAcSsdgjoSFY7pO1Y+iFPSBzIHJkDIPMk1VtCxkGN YreQeVh+R0my0L8RJUIJlvMOraqOZNaRCDLrkLMOzeJA5sgcABmPBhmTO4fMw7KELElK/VNK2YdO IwkB95ka/wD1Rys5KKqwaE1dowR0Ho4fv0II99sZPHs+D7rLmawKvCkRUqsjG/HZggN41rMqCjcm oQILPg/ENQNFK9Zp0YEJrHhN+ADo4lUdiN41dB6Oq+kSOkUCQMDUAkRJyiig8czGfKm94ZmNLWyL lsOzZqfznGJl8cGRNwCtaNUHmRjMdwmWh2GFVZcsDYdIW/h4LiyDLbGXgD2E6AqaGnoZlrqyCNtt TLcmwQxW1JUwwCKiVTukSXZsEe0MK4sYLjCyCHHAQizddN5iK/BF8FyIJgl3RQ2AKVrJQ0KUTXda 8vBxdIFad6bO4WDUYEZrEgXgEa34ITXFpDUt677Y7uPXf5yixc8pmuaTVFZLXf53ngIjY52oX3xL mdRclnHMn3d21wZCG/jdqedIWOx4TdzNULFoRQ8Jy/BOofLwe/07us0ei095BRUFXylns9lvkXYp DiPVYuORp2JrZuypqiJaaPJy8oXG81SV02+nDi2oLOdXR7IAK4lWZ5GS4YTuNAP0scynH4eDorIS MoNCLNAxsx4AzIFrBlY03BKbubuCB+AWrdgiYaneNW4elv3x8PEhK7I5bvT/CjdH8ADcolVcJBdY 73abzcdykOfT7Lbyt2Jc9B7SSUpOvqaC2BMb5KQ85JB/Gk+KcqDTaTohZXTjke5rWkoQYVPP5Rpg EttWgZrsszIJJjH/MfOMVtaQVGEpm9G4mo5QXvSKaY7KEzBw2/uD3sMDsqdSut72M2C6piZOgGri 1RCIblVNlHjKw+/m6vX1q/VzJr37+8nH8qgJq46aQJC7/nfvmzu+evykvCDWeRTJwdfrjxQA57YF 
gFavE4nGiU+WRnHMTIL5+Te+OP+2fvgOMrflicHGw3fLW9cP31WURUIxpdWdjDGstUtg5QpLwLlo 5WwdB92bZC4LlQIz1nbe0pM3biCzSdta/+B8yz2JUZAsrY4sVGKj8EbHOn8YZKMCvR7djRvns6dq ITGQwosa6wA1eCL/bdVAWtVw9dhH5+PpqMi71QLHvNQC+VEteOLoLbUAk217HP3kh2+d1X38fs1g dS0mEIy9XbTQxXj8GdZadJX1bh8Go89VucP+VTZuynPL8HM+yXpV6yobjr9kXeI2i6Nd2QNw88TR W+ImqD//6RQ7cNvScD1MKcy+1Q+XYlGWWraI/WBSUZWUu2qUUckWo3R5XQzgHLkDgPNEu1sCx5kv AerU44RdPn0Mr7JRb5hVir+onOvt38OFm11+vh1MKn8bVi3rd2XrCj7fPEzLgtaLfj/L8xi4OaIH 4ObJDLbEDRInb+LaMXizb1D42JJ153pii8YAzpE6ADhP3rIlcFRgpX9sC+XJHlduofg4Xn4ejcuV 65ctPKbpyxMQ/nPjZQ1LasV4ANEdQm96xafz8ejO9p2Ph8NB0aUhMIjSDa3JE2AInoxnS0Mg0msI sTzYwzbYW2NA48gVAI1nm3M7aDikstITi3eej8yODLh8A3SwbYrYrgOjsfAFKVsykZjXmawq+now zG5/n9pJ4vXoS+9hcIv+DQv+I6SEebn0F5OydZX9Nc3yotMFf2aTG3VQw0NES5M5+G+bqqIEaj6G Whmj5f9wtnCFC0AmWubOIR1hLTHRea//KYtQv6gyd5dzgBaiZe5ckVYtvLDJ8z8GRRmeXw7yHPz2 wqbRTt/ZYHxV9XVpzswQuwHtE7BjB2L2qLmf4/KJbxwtRIh/XUEC7CRapYBLiunuwx4f23mlYFkz WK0AiK6NQzC70eYK9O7guFsuujRBz2Mvqx/zFldwveFlHcRWRJSJYYy1B99ojNGqHxywaFOPnYwu 6hPUS3fSejmYvHL7XvT7+Q7c2pE+QJPR6hGcc0xat+PiubWHbXA2U2t1KejsKJpfP50yVNqW830M l5FbjGqLyzloK0+ota08YeRiw21tK08kZGWXEcZlkiS1zbjl3U27eZxBDCln78mQid0KdS8QVgHz F3mssQi4Qs2H/VKuiFG+jWNdksVFC0azt340k/JeVdd2Y1XD8xUCxKm2b75pAbXOoDE3twxWaRJl pa6/jqVOrjHNdcgxyJ45DyEnG7M0lxwYkZJB5Fpf+bNKDpAiQQ8rG2N0l5w9TBT2sI2hnEtOcpzo IHKNi7FLTvB5eraJXOOK5JLjDBsWRK6xbuuSY8w6TQi5Vj9ZJUepPRwTQi7EKxih87eEbCIX4hUU JkyhQsipEK+gi9cvbKIW4hTU7uQFIaFCnIJKM//67SZyIU5BhcFBRqxCfIJyPf+m4yZyIT5BmbJv YgghF+ITdPllvk3kQnyCEljbwpAI8QnIpzkJ012ITxiBFQsip0N8Qpfv0QqhFuITCqbOIFx1iEvY M9KtsP4XTYfU+w0KZW5kc3RyZWFtDQplbmRvYmoNCjQxIDAgb2JqDQo8PC9UeXBlL1BhZ2UvUGFy ZW50IDIgMCBSL1Jlc291cmNlczw8L0ZvbnQ8PC9GNCAxNSAwIFIvRjIgNyAwIFI+Pi9Qcm9jU2V0 Wy9QREYvVGV4dC9JbWFnZUIvSW1hZ2VDL0ltYWdlSV0gPj4vTWVkaWFCb3hbIDAgMCA2MTIgNzky XSAvQ29udGVudHMgNDIgMCBSL0dyb3VwPDwvVHlwZS9Hcm91cC9TL1RyYW5zcGFyZW5jeS9DUy9E ZXZpY2VSR0I+Pi9UYWJzL1MvU3RydWN0UGFyZW50cyAxND4+DQplbmRvYmoNCjQyIDAgb2JqDQo8 PC9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDI2NDY+Pg0Kc3RyZWFtDQp4nMVcbW/bOBL+HiD/ gV8KpIuU4ftLsCrQpN293jbYbppDD2iLg+soqbexnbXktgv0x99QshzZpkVuLDYJEDMUPc+Izww1 M6SNjl6jn38+Ojt9+RyRp0/RyfNTRNH1/t4Tgp5QjjmSVmIhkREMzfL9vauf9vfQi7NT9Nf+HldI C4KNQZJbTAWCP/Wotz+hCYxrCadHrwaTa3SQT578583jBVKEjJOL/b2jXwQymAh0cbW/RxGBX4rg TdK9l2Oq0MV4f484tQn6FXCfEEwYDB++O/g6G5U5Mp8ff0AX/97fewHi/tjfiwZmm8DaYGuEH/nd AVoF+mMxW+2pYIu7r97GpZOvJcjGViG1nOTmQvVSXav+LC+r5tpC/ZWrUkvX14glhCHPxaVod31V eg0OLw0uFa5nXcTdACdidUzLSiRlmNJGU0Y1NnKLnfDtdhKQErIUxh3BAUv539XoJj9G7w+uZ9P5 7egyI4foz+nHIqPvHx+jfDbLEDlG7gIlREh7jC7mOTobzBCTiKpjxo+FRYxQ4TO4qDvwmJygBDPu v4UNk9uwN5FsSuE92+cToXtPgQdREudt3YgVg8doNM2otNacnRyij18zJbni6reTo+IQLt0WmaRE H6LZfFJmwBpRfFzkwz7p4txUdG0oG+ZKJuOKCFi2OunqmTCLdQTs8GZQgrvNgQLnYePRJBPsEI0H 3zKpiWTgf4Mv10CoxBpYK8rL/EvGuKBYkxSc+fQN06ZS0abg6Wu6nm+ob96UiML10MZ9tCnM2Qpt yiagzatumDadjDajwrT1721B2MrbbvPZMJ+U8KAr7ijskxSqKyfyahMmxSQjRWusu59YaXwphPsd IYoJKT9l70ABoT8cIiTvOpTrgAjgrsNCB2t1aPLhsFevcu7E/YqHCbTJCFTmYQgM4X5HvM0GA3pE u0NAh2x3OEZVq8PwNAT6FA8TSEkyBmGdUl1BeCoGQ7jfwYcqOvrEhjgTaxMGr03AgE2Ylk1YCh22 3aFcx93CQIlJYzU+fSOspiPF39VqyANZTQD3O7K2oYMy5twcOuSiQ1ULAXTYqoNz0oywsupQxFGa hEGP4hEMsmQMQuAnH4DAAGzFn7WODEWoIh96jYKEwtR4dYigIlkdRHEWpCJBZBpA/fgVQtHfTmAO iiafQFwqXacUlDJONOQRELq6wgv43KM6wYC1iirbSg2phmxfY6V6rb8IXrmV5yYiqExWf1GMY/7j c/og7DI37BPbGvAmEQaHZAYdQ7SVUWzEIxc4k8xSTCW0GXS7yuYjF46RDBZJDU0tXRMG9LoSa4UV 96sbYTPJ6kCKigexmRBsu54A/AFrjhSieyWFQpDnlmSfMhGkpKvyEInZA5QLQrA1KeOGFAaMGAYe Ixw1tnIu13L+xKoWrTzLeRPt15tgASbcr3AEccnqPNKqhyAuCLtGHCVLdphsEbVs6qa3V9Jg7TPU r2wEacnqQNJo/MN3LoKgw9s5Wv4co3kxyxjBlgM7xd9FBqjaud6w/Jb1qR2T8JyiYfUg2RTGRWN/ 
Xrk9MQjQ4LVXa5EaC+5XJMJakhWdpIYL3clnAmsJgb78HV3mt+WnorYW6pZc592LV7F4NYtXquqw GZrcjXGNp5mq1vFefV5Q6RZqr/5hFlmywpOEJCHAYpIMNIhbzD+OR2XNIlkwI5ZkmUUP8LfG3oK7 ZDRal/B4tY+gMVklSMLzzz4AiwHY4XR8e5OXeUMivReJfeotABvSj5DiqezGAxthNsnKTxJWVaMf wm4CuKOimOeXtfeX03Jwk80ycvQ1o8xAAnRUHT8pPk1n5aK/6ur1qQuEEepXNIKxXatUPhOqGRMU s3solKzWIjlzDrWdyvP5BBXloJwXqDo7BO99fzC4uamOD/W9wWrAvIxfp4hJSlZckBDxS/qjY6UQ 6NvzlxcvVo8FDa6vZx9XTgZBQLvWMfi2MaLMlueFqgGt//ukV7rjd1tuLILeXcsU253SHfAL2ptX J71QY3GwkNnlwcLVI41cdp5nvHvf6pHGWqwkGhNdv5Mxho1dF9Aa4QSsDWqdWOSQFhHb6EJFdaDB f2KRdSSqATEBdxJWuMMt2y379GaUT0r0cnI19a4vUfAeuhe1Vi9+hAF25GI7ToiR3RNyfjtEp9P5 pCz6nQ9XxBV++PB88I6sZsf50Mq58z9db+8LWq+3QdBfc3gMljMIat4sW+jVdPoZHoroPB9c3owm n+s6h/uvarytjmnDz+ksH9St83w8/eLdQrgvjYvo1HsDETR2ZDU70qh0iMZt8el9gSGxqyw6hEy1 XJSkuCTL8pRYtpo+qpl7kYRSypZX6d24FDz6lI/gsSPN2JFHqbENpKe9+qNwT7og6nk+GYzzmodX teu9+Xu8dMKzz5ejWe2N47rlvLJqncPf1zfzqsj0bDjMiyIFjT79I2jsyD12pFGYII29O+TiUyVB bJIAm0IcyE3MffeKypQbGX3Hd4tNfIumsFefwhH22pGa7miv8LZgdaPXdafahArCnn2eTKun+y+F S4zr1ghiVYReD8pPp9PJles7nY7Ho7JPohgE+4b6FYwgqiM93pWomDJUooUlhB3tXEmYulf5iXdk ujsyxSjWXeeVXKLRe9plONaOKh94xGx07HLvOBuURSi0I5LEfAtSe94vRuP88ve5W0xeTr4MbkaX 6L8Qr9xCIlxUkUs5q1rn+V/zvCh7jVcaY42ajU160pUJCO8uTaYJO0OokjJuNzOFutVrIGGFO9UT Mw09IzPGXfU8iJx49fTBRxhksjIN0O7OFHTUrQbDT3mCslVdpvHCR3xMNlmZhhuJRVeI+8wVSf41 KqtE62xUFLCEvXLlkrW+k9H03NP3dq0vQdDrvYWIOU1WM3HnKzvnNGEsFcTWTfqxrJ60ayEiwUJU n+LxKvbu4EmvUIpisQVqtXLElrMALc7gecDq+ZCQcDKZwkr99x+00mQVIa4gs+6Kup1Hv9r06Ofr Xv58NHux3vdsOCx6tSIuq/3EoNLpVxufBhE8JisJcWm6pyTlahPCjs7cPK3W+pSCR5/qURuGQqxs GEpClzt7KxuGkqjWhiZcl9Z6dv1oe9dvfduQS7gsF19z4j6myTYGuOPOd1/CQv7ZCNVcbu1stgcs tGypUX2Zyqomy0FLoOZLW7yiOkdtzra35LH90AjckYClojsJ2kTxpusOpS2YM6e6ExwS581318VB Li15lDhvfrYujsJc6ihx3uh6XRwhmEXdrPQGp2vimLFYR92s9MZl6+I0mGvUzUrvA3RdnDJNWB4S 513H18XB2kJklDhv5XVdnNBN6BASt9VZ2uK4ch/9jREX4xWMKcxYlLgYr2BUYqWixMV4BSOw6kRJ i3EKl85LESNOxTgFNbz5RpeQuBinoNp9KVqMtBifoIo1H9APiYvxCQoBHI271xifoO7rWqKsTsX4 BOXUHWuKERfjE5QRHOX/KsYlKLHYxjER4xLWYqbixMX4hDFYkyhxOsYnILKkUbzqGJeAYF1G0apj XEJufeL8HyXIB/YNCmVuZHN0cmVhbQ0KZW5kb2JqDQo0MyAwIG9iag0KPDwvVHlwZS9QYWdlL1Bh cmVudCAyIDAgUi9SZXNvdXJjZXM8PC9Gb250PDwvRjQgMTUgMCBSL0YyIDcgMCBSPj4vUHJvY1Nl dFsvUERGL1RleHQvSW1hZ2VCL0ltYWdlQy9JbWFnZUldID4+L01lZGlhQm94WyAwIDAgNjEyIDc5 Ml0gL0NvbnRlbnRzIDQ0IDAgUi9Hcm91cDw8L1R5cGUvR3JvdXAvUy9UcmFuc3BhcmVuY3kvQ1Mv RGV2aWNlUkdCPj4vVGFicy9TL1N0cnVjdFBhcmVudHMgMTU+Pg0KZW5kb2JqDQo0NCAwIG9iag0K PDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCAyNjU2Pj4NCnN0cmVhbQ0KeJzNXG1v27YW/h4g /4FfCqRDwvCdVDAVaNJuy12CdWmGDmiLC8dREi+xnVly2wH98fdQL7ajMCJXi/V1gZilpHMe8TmH POeIMtp/g378cf/06PgVIi9eoMNXR4ii6+2tPYL2KMccyURiIZERDM2y7a2rH7a30OvTI/T39hZX SAuCjUGSJ5gKBH+qs979gCZw3opwun8ymFyjnWyy98fb57WmABmH59tb+z8JZDAR6Pxqe4siAv8o goukvZZjqtD5eHuLWNgE/Qx69wgmDE4fvt+ZDSaX0zH6PBsVGRK36PlHdP6f7a3XIPb37a1gAOwx AEo15saN4P1OS9Hv9aitDgmrR6G8jEsrX0uQjROF1GKwmwPlV3ms/LM4rJpjNfwHR6WWtq8RSwhD joML0fb4Q+mVcvhq9FJhe9oilidYEQ/PWbEWySjmskHKiMGKPmEv/Gl78UjxWQzjluCnLaY0lf9e je6yA/Rh53o2nd+PLlOyi/6aXuQp/fD8AGWzWYrIAbIHKCFC0QN0Ps/Q6WCGmERUHXB+IChgo8Jl cEF34DA5QQlm3H0Lj0zukb2JaEMK1zw9nsjpc9+qURLrbd0aSwYP0GiaiiQB+aeHu+jic2qMVlj/ erif78Kx+zxljCa7aDafFKnUSptknGfDPvniMDcQ7kDrJ0tGI4sInJhOvnpmLME6QO3wblCAv82B Auti49EkFbtoPPiSGqW1pLto8Ok6FQJWI7OL8uIy+5RyYaw59EoZ5xg81wnXz5qKxZqCVdh0rXOo b9qUCNLbzRpbsmZ0fNacaP2s6WisGeVnrX9f86otfe0+mw2zSQHrXL5ksE9SqC59yInGT4qJRorW WHcvWHFcyaf3K0IUE1LcpO8BAJMfdxGSy45EfOwTFGMaG+pHtYsg6GhAUG1RsWUHk9DRqydbF+Zu WH6jSaIZjTKbMRqf3q+Ir7ChDdAjVjqMgg650pEQ6FCrHSYOgS7gfgIpicYgzI2qK+6PxaBP71cI NhZ0cMKAH7PsEMx2JMsOpSyDyXJi0CISgy7gAQx2ZPrrMkg2xKBH71cEWUZNBxei5CfBsupIqGRV 
R1J2UEGoqjtk2SGpisWgA3gAgywag4JiuQECPWpL/pLEkqEhY19jnXVFQUJhapwYAqiIVgZRnHmp iBCZerRefIZQ9NdDGIO8SScg5tB1SgFpBFe7NnC1VRfwuGdVepEQLjCVTX4hiAWn+y28SJvIu+AH kBit8KIYtyXI786iT+1qUogOYFFPCWbmmY0coSXkMxuTpBQr2wdUpgzczx62vRwr8azX2VDpkjwX 6gD2olViFBUbYc+ntmavT83GYGmEX3VjMRJsQkksBRiFtkZBMBeVrZCUQzrZq4EwI8vqgQtdgIHE K/oQidkGygc+tZWBjBuyGPg0te4rbIOVNEFL0cbhS9eXtkVov55d1VedgAOIi1b3kYnaBHFetS3i 7MxbclLNvHVTLnv1srf0vP4JNBQL7gYeQGC0GpE0Gn/3hxpepcP7OVp8DtA8hzgI0AI7+T95Cgk3 t+1h8SWViglKbdT015V9dAWBFHz3ypzUJXMuzAHMRSvUSA0HupPECMz5lB7/hi6z++Im71N3YuMZ EXLH/d4zpRAFG79emGDsgmBnl/pb1N+m/qaqCuShye05tvEiVaJsRojfnZD91sqiFaWkSvzcRciI vXrz+cV4VFTzDKmZEQuyTN0D/LXYq7mLRmNi6xpO9AE0RqtMSVh/kw2w6FE7nI7v77Iia0ik/1ck OsAHcBitNiVhKjd6EyR69I7yfJ5dVq5YTIvBXTpLyf7nXmf06kmdF0pKmYE4kOyX+2Hym+msqMCk ZVev8QVk64a6IQWYybp1M5fdVmYiKGbfAChaDUhyhhXvIO1sPkF5MSjmOSo3M8G1H3YGd3flfqa+ H/kaUYYGLkwBgxSt1CLBmiT93lGhT+m7s+Pz1619SoPr69lFuVWp2qgEkfuD/w6+tI6u7l0qj8fa zCQZ5PRP3FXYfkdVM1rvKmTJYlfhw/2MS3TOzYzL6x7uZ6zESqowp9WVzBYTZVvAyhlWQOukle2K XEnMeYOFinKSdG9XZB21BI8Yn+kSjkXXpHx0N8omBTqeXE2dvhyk3sF3YjC16YVLf4Avd+Tm6w2I SET3gJzdD9HRdD4p8n7Ho0q3nOoDxqMj411zPIy0/vxv57ZvVVrNbV6lP2ew5BQziFreLlroZDq9 hQUInWWDy7vR5LaqZNj/lY135SZt+BzN+kTMZbkaeSFng0r7WTaefsr6NJ065HUi8JsO70g/1zQd rex+Yk/M26/9QKhppxWvailq5ZSYRdFLLFqk/AvRqGT2HEYSBkPbfOjyvBg8urAH8NiRf67Jo9Je HnudA4RdXb1az7LJYJxVPJxU7v72n/HC8U9vL0ezagYYVy07E5StM/j75g4iVvi8HA6z3Fk4W5dG F/4AGjtS0DVplLDkdkepEfzRwInCr5u0XO/bWjQGjy7oATx25Ihr8igSP4+9+mP5+Mer9vR2Mi1X 2p9ymxBWrRHEjQi9GRQ3R9PJle07mo7Ho6JPohgE3pDMOwEGENWRO69JFLf7fjbkcD7dwc4VgykX uACmOhL4dZki3aNlg/7eUyDDsS6pcigPGI2OjQFrjgak8P6q2Jqa4LInNK2O+/lonF3+NreTyfHk 0+BudIn+hHX8HpLSvFzRi1nZOsv+nmd50es63hhr0Gg8pidayi4o85WSI2RkPqWUJdoQtclZxQUx gKh4pQQCDt6VEh4NhjdZhNJKXUpwqQ8Yj2ilBJ4IrLsi+pc2kf9lVJSB+ekoz8G1T2xK3/T1adc0 sSulH1SN43A0PWtjg753rb4IAagTYcD7uNHyem6kLepuJK7x6taNcrnM5pf5uugbFhMaE+qG9X5n r1dVktppzjsCTdViUb+ghEND2bmZMpoI7nxrfF0rdY+A10qjVS24Vt3jZD365LFHv2p7+avR7HW7 7+VwmH8Hz3fdQsCYRish2L3OYkMZjVd3/IzG5ZRV7BE8ML2ODAczIQHKl/e+UsOMYa8uIEEP7wR/ 8PBO2h3Z1PHwTlKJHz6/s8/YHj2BI6tP4NqP8LgimCX1743YlxdZ+wSZ2FFofg1FJpCd/YszVHN4 ifLBCTXKFRjlr5o8RNKctFRU/3qKU1T3WY9H21ny6HjDATTA2tPN7GMtznTdalkVLKiFbgX7xDnz 3bY4yKWVDBLnzM/a4uy7yEmQOGcW0RZHEizDbtYZhLfEMRvB6hBx0hkLtsUZY99TCBHnXLTb4rTG JogK6Vyv2uJgbqFhN+usvLbFycXS6hP3pLOsihMKk7CbDfEKBtOMUEHiQryCMYGDrFiGOAWzL7KI IHEhTsEgTVVh9xriFBSy3yBeVYhPUMOwDLpXFeITVNPmd1984kJ8giqKBQsSF+ITdgkyQUyoEJ+g EDMGTXYqxCUoW7wv7hMX4hLUhk5h9xriE5QYHOT+KsQlEo0TEWZ1IS5hdFkPChCnQ3wCkpIgE9Yh HqFU81tjPmkhDiGlzVKekPY/PwYExw0KZW5kc3RyZWFtDQplbmRvYmoNCjQ1IDAgb2JqDQo8PC9U eXBlL1BhZ2UvUGFyZW50IDIgMCBSL1Jlc291cmNlczw8L0ZvbnQ8PC9GNCAxNSAwIFIvRjIgNyAw IFI+Pi9Qcm9jU2V0Wy9QREYvVGV4dC9JbWFnZUIvSW1hZ2VDL0ltYWdlSV0gPj4vTWVkaWFCb3hb IDAgMCA2MTIgNzkyXSAvQ29udGVudHMgNDYgMCBSL0dyb3VwPDwvVHlwZS9Hcm91cC9TL1RyYW5z cGFyZW5jeS9DUy9EZXZpY2VSR0I+Pi9UYWJzL1MvU3RydWN0UGFyZW50cyAxNj4+DQplbmRvYmoN CjQ2IDAgb2JqDQo8PC9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDI1NzE+Pg0Kc3RyZWFtDQp4 nM1c7W/bNhP/HiD/A78USAeH4buoYCrQpN2WZwnWpRk6oC0eOI6SeIntzJLbDugf/xwp+Z0RuViM nxSwWIq6O93vjrw70kYH79CPPx6cHZ+8QeTVK3T05hhRdLO7s0/QPuWYI5lKLCTSgqFxvrtz/cPu Dnp7doz+3t3hCiWCYK2R5CmmAsFHNerDD2gI4xaI04PT7vAG7eXD/T/ev6w5BdA4utjdOfhJII2J QBfXuzsUEfhHETwkzbMcU4UuBrs7xIhN0M/Ad59gwmB47+PeuDu8Gg3Q13G/zBFl+u7lZ3Txn92d t0D3992dYAnYugSUMUy1W4SPe2iZ0e+12hZ1wmo12Me4NPQTCbRxqpCaaXt6w17sPfsxu62m92rx l+7KRJq+KVlCGHLcnJE295epV8zhMuVLhelZJTEfYEgsj1kwF8ko5nIqKSMaK/qIwfDHDcZDxWcy jBuAHzcZayv/ve7f54fo097NeDR56F9lpIP+Gl0WGf308hDl43GGyCEyNyghQiaH6GKSo7PuGDGJ qDpk7JCnIBsVLoMLegOHyQlKMOPuV1gzuTV7E9FUCs88rk+EnqwCB0dJjLc1c7QIHqL+KKMiTfXZ 
UQddfs0USwlNfj06KDpw66HIREpFB40nwzJDTAiqBkXeaxMuzrWFa01YP1YyGlZE4FQ3wtUyYClO Atj27rsluNsEIDAeNugPMyE7aND9ljGliFYd1P1yk1HFseAdVJRX+ZeMJjTBjMbAzCWvHzYVCzYF y7BuWuhQ27gpEcTXA1syhy19Btic4vphS6LBppUftva9zcvWettDPu7lwxIWumIOYZug0MQ6kVMa Pyg6GihJgpPmFSuOL/n4fkeIYkLK2+wjCCDp5w58zjsS0wERwLxDQAdb7Eg+d1r1KuNO3C24H8A0 GoBKbwdAH9/viC+ikQI8grQpB1USc+0XZC6EJiCEXJBKM+hQix0yjtG4ZPQbDSXRrAbmRtUU+Mey Gh/f7xBtzOFIjZfrxQ5wapTOOyjhpmM+MVAq4iDoEjwAwYZcf1MEyZYQ9PD9jtJ0CgflxEzL0CHr DqV11ZHaDs7hsbpD2g4hFYmEoEPwAARZNAQFxXILAHrYWvzS1IAhEwJwtRoFCWXqRC4ZAqCIVgdR nHmhiBCZerhefoVQ9Ncj0EExzSc4rCRCVznF03MGh0SMcKypVyRq3JN0TLhsij3g5y+qpEZxAtYC Ee4sr2E8AY/TvNWaj4CR3CllgPlEq/koxk0k8uz242O7mI+iQ4h8MhBXvzCBM8m0wCSFNoNuU93m L0xsZIbwBJqJHU2SF61OxZpiwd2CBwAYrRCkqNgKgD62KwACbBYU3iooFPzWzMkuYQJAiVfmIRKz LdQLfGwrUAZTUFhGMVXgMQKgYcR6lwGJWn+CFuW1x9nOVt1JJDaycUkcgFy0So9M1TaQ87JdRU4u ABUBHkY1ZtQtVgA80Wo+UifYX4bfkBPk5s/DqdoEcXJaqvI9TNDs7xBNCghgNOYCwC/+KTLBsDbt XvktU6kQJsT66zqjHRN2XWdPL1y43FYyE8WEKWfdLqKVkmQCN5pTygjA+Zie/Iau8ofytqiAo2ZG NWFLfRX1VddXqqrAFJrcjDGNV5kSttnu/qOEldgtvx9FFq20I1XqQzFKXunlW0wuB/2yQpHUyIgZ WLruAfxW0KuxiwZjatZQp/QBMEar70hY09ItoOhh2xsNHu7zMp+CSOcgtlrbTYWJSX3S/B/YjUPC ALOJVlSSJudOtmE3Hr79opjkV5X3l6Oye5+NM3LwNaMMkhpyYA+VFLejcVn3265WV10AjFC3oAGI bVp7cplQhZigmD1BoGjVDMltePI4lOeTISrKbjkpkD0RBM9+2uve39tDQW1vm2owL+2WKUBJ0SoG knEs6XPHSj6mH85PLt4uH/bp3tyML5fO+0BAu9LR/bY2osxmp4DsgIX/twmvNIfqHnmxsHODqga1 Pp3H0tnpvOVzgXPpnIcC588tnwusyEqqMKfVk4xyrOQqgYURhsDKoIVjf9zsWvKpLFTYUwHuY3+s IUH3kPFZL+FYNM3Wx/f9fFiik+H1yOnOQewdeKcalnXh5h/gzg0p8WYKERBvNCrk/KGHjkeTYVm0 qw9TCBVu9gH6aEgFN9SHlsaf/+309lSm1fTmZfpzDqtOOYYY4v2shU5HoztYg9B53r267w/vqgzf /M82PtjTzvB3PM67Ves8H4y+5G3CWAeDzhfww8gbcsENYUyUOQnmiQZbhRLyKGvRPs5UiboYw5me FWbErEXqK4WFxNxQCYTas7t0Pi4Gji7hA3BsSAY3xFElXhxb9UdhVjov1/N82B3kFQ6nleu9/2cw c8Kzu6v+uPLGQdUyXmlb5/D57n5iazqve728KGLA6JI/AMaG5GxDGCUsf81BYwwYA7i2yzfRMFD4 GZO5F2/Qcu6Db2o6LtEDTKchS9zQdETqB7FV27FbKl62Z3fDkV1ofypMSli1+hA2IvSuW94ej4bX pu94NBj0yzaBYhB3a+oWMACohux5Q6C4OT3jXXLjOJyPd7BzxUDKJVwAUg0p/KZIkWZtmZi/9QxI c5xYqBzMA7TRsNu+oTYgg/fXxTbkZDcFnZwW9X7RH+RXv03MZHIy/NK971+hPyF0eICctLBBRDm2 rfP870lelK2GDlNjDdLGOjzRMnZBmbeuGyF08HEVmrC0jWDctQfAKbhK0Ju3P6PWX9fwMn+22dMl SIBBxquYEJjImpL1427vNo9QQaorJi72AfqIVjHhqcBJU7L02tQrfumXNuc56xcFTGGnpnKx0nfU H507+j6s9EUIep2vEPBV2mjlC66lqSNvJZby8k6m6YeaFy3mM6FoXzAmlZHIKdjHvf1WWSmKxSOs los4dKYFaHFi1wNlFZCkCXF+4XtTK3W/v9dKoxVneKKatWQ8+nTdo9+sevmb/vjtrK9VOM0XZKhf 0FqO171e8QyzjUuaAByjVWe4SrDYUubm5R2cuTlacj4/xcDRJXrQ3p3gS3t3kpLZJtvS3p2kEi9v 35kttrUNOLK4Abe6g8cVwSytf7bDfAWQrQ6QqdHC9EdFZArZ2b8Yoaa351IuDailXBDD/jjIsiTT QXNG9Y+QOEk1j1rXtrPkMUWX0rXzEvBKprjSHBCss3Hm64bNImEupjbjI+dMeFfJgSWmSRA5Z4K2 So7Yn/kJIecMr1fIsRSw4UHknNHpKjmYNwUNISedgdkqOVjqkyDdSecKukpOJJgF6U46J/JVcpzh MOGclddVatRsPweRe9RZFslBDiJFELkQp6B6dgTRRy7EKUwGGwZEiE9Qc84tTHUhPkEFuJgKIhfi E5QTrINUp0J8gprDHEHUQlyCElvHCCEX4hIaHEwFWZ0K8YkEHCxMcyEuISFKCHvVEI8Q3MRlj1D7 H3V6bAoNCmVuZHN0cmVhbQ0KZW5kb2JqDQo0NyAwIG9iag0KPDwvVHlwZS9QYWdlL1BhcmVudCAy IDAgUi9SZXNvdXJjZXM8PC9Gb250PDwvRjEgNSAwIFIvRjIgNyAwIFI+Pi9YT2JqZWN0PDwvSW1h Z2U0OSA0OSAwIFIvSW1hZ2U1MCA1MCAwIFI+Pi9Qcm9jU2V0Wy9QREYvVGV4dC9JbWFnZUIvSW1h Z2VDL0ltYWdlSV0gPj4vTWVkaWFCb3hbIDAgMCA2MTIgNzkyXSAvQ29udGVudHMgNDggMCBSL0dy b3VwPDwvVHlwZS9Hcm91cC9TL1RyYW5zcGFyZW5jeS9DUy9EZXZpY2VSR0I+Pi9UYWJzL1MvU3Ry dWN0UGFyZW50cyAxNz4+DQplbmRvYmoNCjQ4IDAgb2JqDQo8PC9GaWx0ZXIvRmxhdGVEZWNvZGUv TGVuZ3RoIDM2Mj4+DQpzdHJlYW0NCnicjZJda8IwFIbvA/kP5zIVmuYkOW0K4oVfwzGHw45dyC6K tu7G6mT7/0urQnWTtYF8NOe8z5vDgWgB/X40H83GoAYDGI5HgLDlLFQQopEGKCVpCZzVcCw4K3uc wWQ+AmhlYvSUV1sQRRW+LoOzzDDjLJoioJZpDFnJGYLyA8HEkFgr/ZLtOFM1TsEDZytBEoLQikVA 
ShTHch+EJI67vP5XrQt/p8W+9IsRhyBMRV7vvtYfxaZJa47Vpt425+8qeIfskbOJt1LbuRjQnm7T tomVCO/GplLfxB7O4ACVOMFJPE+XNwrRVAOiVPb68Zqkwms9aGX+Kq6+V9yLonWx9HOspEm7CJr/ BL0va53USRc120XNGJlQFzXq9lgk9HMHvfgs8Vl3tFSnLyWw5CRpiFGDdv7CNc391oOKM0uqgenE 14Aa/6farj0vmu3ybeGbZ7yHlz+AyR0g1m3kugKb91EL6CNawB+1M8XzDQplbmRzdHJlYW0NCmVu ZG9iag0KNDkgMCBvYmoNCjw8L1R5cGUvWE9iamVjdC9TdWJ0eXBlL0ltYWdlL1dpZHRoIDYwMC9I ZWlnaHQgMzcxL0NvbG9yU3BhY2UvRGV2aWNlUkdCL0JpdHNQZXJDb21wb25lbnQgOC9JbnRlcnBv bGF0ZSBmYWxzZS9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDY0MDM+Pg0Kc3RyZWFtDQp4nO3d v2sk5x0HYP8VaQNpUrgyKa64xmAVNjYYAooMAvkqE4OKXBAkIEJMGjUmbkzwQTDhGiWQuLlCEGzI BRkMsQsjAoFDhEAw4ppAiEgi+bQZbsgwN/tDq92dnc/sPg8u9sbS7mdfve/73fnx7gwGAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAA0IHLy8tbt24999xzP/7xj+vb33333Wpj9TMN3/jGN/7yl79Uv/LFF19U/+ub3/zmv/71rwkv N+zjjz9e7Fsr8wwnGZeh0QLTqz/hzE8CQCfqc3i9Et20Dn700UeN/zuyFE6ogwsvhTetgzNXsbKt Sr/85S8XFB+AZagXhe9973v//e9/y+0j6+C4OvXkyZPvf//7VRWofn64KIx7qvLl6gHmd20drGco 63hjD3dKZXgVEKCPGjtH1WQ+Tx1s/PrIl2s8VVmz6nWwvoPZKDH1/a9G6ax+qyh/f/jDH6avg8Mb RwYof6wol/fv3y/r5htvvDG8g/z3v//929/+dmPjyF9/+PBh8ZPFg+J1y18pA1fP0Mg/8r1X4X// +9+Xf4jhlqkftR7XaKo5sIaqKfTDDz8sJt5q1r1RHRzU5udxZwYbL1d/qqqMVnWzPtuPLNB11W8N H5udvg429gfHBWh8bNjc3HzrrbcadbBecSrlazV+vahHf/3rX6uKOc6EZpl84HpCy1SlcEJTA6yD elEoZ8ty8pzm/GB9whz+mZFFc8K5ucbOVFW/yl8p/1l/PHj2yGfjgp+qts5wfvDaAM89u6tbPy46 /ANlq4779Wq/r9xS1dDy2YZ/d/J7b1S3xv+tJyxbe8I7XUDfAuiDeh2sz5A3rYOlxn7HhPODE55q 5P5Uo7DWDzzWDyTW5/CbXidTZZgQYOS+ZL0ODscYbuH6r5c/3/gMUP1z5FsYfu/DT1v/xeFIN2pq gNXWmEKrXcKbHhcd+ZzXHpOs5uH67tXIw5vVrwwfxytfZbhkjJv/r307EwLUPypUP1+vg8Mx6idP h3/9RnVw3HufXAfHfR64tqkB1kFjCq0m7Xp5urZwjKtBw5dfTrgcpdodK7eMvHa0qpv1Mlr+5PAr 3uh60boJAa6tg9PsD15bB0ce/Jzw3mfeH5zwTgHWxLgp9EZ1cHihRP3E1uSXqyrv8MWW9Qm/fKrG vF1fbTHD+cFxb2dCgGvr4DTnB2erg9O895F1sPHXqe+fTninE3sNwOqYcMJr+jo4GHOEbcr1g9Vs fO1FjCPPZ1Xz9jzXizZMvl50Qh0cF7J+veic+4PD725yHRzZMhOOtbpeFFgrkwvTjc4PNibqkdPp 5PWDz41ZH1F/qvoKwX/84x+NsjLz+sFhIwNMUwcHU6wfnKEOTnjv19bBxl+ncSDU9+EAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAACw5q6urvb394+Pj8t/Xl5ebm1tbWxs7O7uXlxctLcRABIc HR0VFaqsg/WaWGw/ODhoaSMAJDg7O7t79+7e3l5Zp4odt52dnfPz8/J/bW5uFo/b2NjlewaAp4o9 tZ/85CePHj2q9teKOlXUxPLQZVG/tre3T09P29jY5dsGgKeOjo4ODw/rxy1PTk6qU3hVzWpj4zTx PgdYtGkmn++89fm1/8067xKk2k2r10H7gwAz1MGvvvrqxRdfrE76PHny5O7du++8806x/datW9+q efjwYfUzxdz46NGjpb43asrLY+qKfUPnBwHmrINlESzrXaM+Fv+8fft2WfuKx2+//bZL6BPU9wfL x0VBHAxd8LnYjQCx5qmD9SI4GLWfWO0Dfvnll8UOY/mg3FV84YUX7CZ0wvpBgLqZ6+A///nPehEc jNofrPYB33///eInv/7669dee638gQcPHpSVEQA6NFsdvH379ptvvlns1tVr2fD5wTt37hR1sNox LOrgK6+84iwhADlmq4NFvbt//35R11566aXiQbV9+PqZYjewvmNY1crnn39eQQSgc3NeJ9O4GKZe Bwf/PxxanRysKzaWe4stvz8AmGT+dRMPHjwoK1pje3UUtKyGg2drnzoIQIJFrR8sitrf/va34fWD jZWDRU10vSgAOXyfDADrTB0EAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGAJvvPW5wv/r+v3BADTUgcBWGeLrVzqIAD9UlWuhewG qoNAS05OTjaeevXVV8/Pz8uNl5eXW1tbxcbd3d2Li4v2NrLC1EEg39nZ2ebmZln+jo6ODg4OigdX V1f7+/vHx8dtb2S1qYNAv1Q1sdhx29nZKYtjqxs7fbu0Th0E+qXaTSvq1N7eXnnosqhf29vbp6en bWzs8t3SPnUQ6IuiSL3++usvv/xyWZtOTk6qU3hVzWpj4zTZPqe32qiDsBBtTab0XKu7fvYH15D9 QaB37t27d3x87PwgC6EOAvlGHq4sr+08PDwcDF3wudiNrDZ1EOiFoiqV6wer84MD6wdZBHUQgHWm DgKwztRBANaZOgjAOlMHAVhn6iAA60wdBGCdqYMArDN1EIB1pg4CsM7UQQCWb/6iM7IMzZxkIZEG 6iAA01EHAVhni60X6iAA/ZJTfXKSALA+cqpPThIA1kdO9clJAsD6yKk+OUkAWB851ScnCQDrI6f6 5CQBYH3kVJ+cJACsj5zqk5MEgPWRU31ykgCwPnKqT04SANZHTvXJSQLA+sipPjlJAFgfOdUnJwkA 6yOn+uQkAaBV88/zI2f+ecIsJNVAHQRgCupgeBIAWrXYKVodVAcB+iVqws8Jk5MEgFZFTfg5YXKS ANCqqAk/J0xOEgBaFTXh54TJSQJAq6Im/JwwOUkAaFXUhJ8TJicJAK2KmvBzwuQkAaBVURN+Tpic 
JAC0KmrCzwmTkwSAVkVN+DlhcpIA0KqoCT8nTE4SAFoVNeHnhMlJAkCroib8nDA5SQBoVdSEnxMm JwkArYqa8HPC5CQBoFVRE35OmJwkACtp/tl15Hw7c5KFRBqog+ogwHTUwfAwOUkAJjg7O3v99dc3 njo+Pi43Xl5ebm1tFVt2d3cvLi7a2ziPxU6MKzPh54TJSQIwTlGbtre3T09PB08L4ne/+93i8dXV 1f7+flkTj46ODg4OigdtbJxTzjSbkyQqTE4SgHFOTk7qJenevXtFqSqK487Ozvn5+eBpcdzc3Cwe t7FxzvA502xOkqgwOUkAplHtGxZ1am9vrzx02erGOQPnTLM5SaLC5CQBuFZ53PLw8HDwdCexOoVX 1aw2Nk4T7PPx2phmZ5OTJCpMThJoWPwcSs+VV7CURXDw9Lil/cGeJokKk5MEYILyetHqStHB0zrl /GBPk0SFyUkCMM5wERw8e4y0ccHnYjfOKWeazUkSFSYnCcA4RUnaeFZZE/u1frDzaTYnSVSYnCQA Kylnms1JEhUmJwnASsqZZnOSRIXJSQKwknKm2ZwkUWFykgCspJxpNidJVJicJAArKWeazUkSFSYn CcBKyplmc5JEhclJArCScqbZnCRRYXKSAKyknGk2J0lUmJwkACspZ5rNSRIVJicJwErKmWZzkkSF yUkCsJJyptmcJFFhcpIArKScaTYnSVSYnCQAKylnms1JEhUmJwnASsqZZnOSRIXJSQKwknKm2Zwk UWFykgCspJxpNidJVJicJAArKWeazUkSFSYnCeHm7yEj+wysvJxpNidJVJicJIRbVO1r9BlYeTnT bE6SqDA5SQi32D+ursL6yJlmc5JEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVm kzN2cpJEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVmkzN2cpJEhclJQjhdBWaT M3ZykkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVmkzN2cpJEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMz dnKSRIXJSUI4XQVmkzN2cpJEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVmkzN2 cpJEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVmkzN2cpJEhclJQjhdBWaTM3Zy kkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVmkzN2cpJEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMzdnKS RIXJSUI4XQVmkzN2cpJEhclJQjhdBWaTM3ZykkSFyUlCOF0FZpMzdnKSRIXJSUI4XQVmkzN2cpJE hclJQjhdBWaTM3ZykkSFyUlCOF2FBGdnZ5ubm+fn5+U/Ly8vt7a2NjY2dnd3Ly4u2ts4j5yxk5Mk KkxOEsLpKnTu5OSkKE+vvvpqWQevrq729/ePj4+Lx0dHRwcHBy1tnFPO2MlJEhUmJwkjzf93GfmX mjnJQiINdBVu7t69ey+//HJRoar9wWLHbWdnp3xc7Se2sXHO5DljJydJVJicJIw0/99l5F9q5iQL iTTQVZhVvTYVj/f29spDl0X92t7ePj09bWPjnJlzxk5OkqgwOUkYabFNqqvQd/U6eHJyUp3Cq2pW GxunCfb5eG2MndnkJIkKk5OEkXL+QMtPstj5k9Vgf7B60OskUWFykjBSzh8oJwnrrF4HnR/sb5Ko MDlJGCnnD5SThHVWr03ltZ2Hh4eDoQs+F7txTjljJydJVJicJIyU8wfKScI6s35wZUZxTpicJIyU 8wfKSQL9kjN2cpJEhclJkmP+phjZOPOEWUiqga4CXcgZOzlJosLkJMkxf1OMbJx5wiwk1UBXgS7k jJ2cJFFhcpLkWOy70FVWuKvANHLGTk6SqDA5SXJEtUlOmJwk0C85YycnSVSYnCQ5otokJ0xOEuiX nLGTkyQqTE6SHFFtkhMmJwn0S87YyUkSFSYnSY6oNskJk5ME+iVn7OQkiQqTkyRHVJvkhMlJAv2S M3ZykkSFyUmSI6pNcsLkJIF+yRk7OUmiwqQlWex/fW+TqDA5SaBfcsZOTpKoMGlJFvtf39skKkxO EuiXnLGTkyQqTGCShViNNokKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn 7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIk J4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBf csZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQoj SXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME +iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKo MJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5 SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQk iQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kK k5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZO TpKoMJIkJ4kKk5ME+iVn7OQkiQojSXKSqDA5SaBfcsZOTpKoMJIkJ4kKk5MEWnV5ebm1tbWxsbG7 u3txcTH/E+aMnZwkUWEkSU4SFSYnCbTn6upqf3//+Pi4eHx0dHRwcDD/cy62ty9kFHeeJCqMJMlJ osLkJIH2FDuDOzs75+fnxeOzs7PNzc3y8Tzm/+g48sNkr5NEhZEkOUlUmJwk0J6i9u3t7ZWHQ4ua uL29fXp6Oudz5oydnCRRYSRJThIVJicJtOfk5KQ6LXijOvgdgEVrecKDEdrYHwSAvmjj/CAA9EV5 vejh4eFgcdeLAkCPLHz9IAAAAAAAAAAAAABr68GDB++88075+Ouvv/7hD3/YyRWq9Rid0yYjaZZh 2gR66vHjx9VoffLkyd27dx8+fDh4OprKB8tXxXj//fe/9VSH41qbVHSVKcMMtAn0Rzlk7t+/X235 8ssv79y585///OdHP/pRh99dU8Soxm99emlb+Vrl7FE1y5q3SUlXadBVYDUUY+TnP//57du3Hz16 VG0sPkb+7Gc/u3XrVrcfI4sY9enlhRdeWMLEUnyML1/0q6++evHFF6tXXOc2KekqDboKrIbyLMbv 
fve7+pgtxnUxiotBVB5XWc5wrn+6Ll9xeHqp74y0pHiV8pNz4/zOOrdJSVdp0FVgNRRD+L333hs+ clI/1V6MoJY+TH7wwQfV01afruvqMYpxXURt+8KDchL7Vk2VajltMqg1S0iblHSVBl0FVkMxOsrv 7i5PKDz//PPlUa9i0nvllVfqR8AWrjrf9Nlnnw1qn65L5SUZbceof4q+c+dOOUsUG/f29obbYQlt Mni2WTppk3F0FV0FVlIxuf3mN78pBlExrb355pv1j5RnZ2effvppq69efF4tZpXiY2rxmbZxzXk1 fluNUR0squ/mFG3y9ttvV5+c6zPMEtpkUGuW3/72t8tvk3F0FV0FVlL52b4a1I2rIFpVzCd/+tOf qjmtfjliY3ppL0D1YX5Qm8QaE139JMsS1Jtl+W0yga6iqwDzGz6b3ziy9NJLL9WPOy1c8RKvvfZa NVl98cUX5aflxkS3hCR1E5plyUkG/693Vcnriq4yUlRXAWZQv8SuGLPl43LJ1XJGbnltw/AlfN1+ hO68WSrVpY+dX2nfeZvoKsDCVQeRyqNqv/71r6vBu7QrzIsX+uSTT4YvHqgutytCfvTRR23HqEto luq16hc9dnilfUKb6CpAS6odjfID7TJn2vLK/8HQ5+fqyFI5mZQXYywtVanDZql/90j9JNfyT3gN B9NVhnXYLMBsqh2NcvBWB3aK4fyrX/3qgw8+WP513Y1FcNWJlaV9nG60yaB2vKuTZqnv+tUfL/kb m3WVYWldBbiR+o7G4P+TanUZRrfHcOo7O8VcV8wty7wKpdEmg9rVKZ00S701Oll0pqsMy+wqwE3V dy7SVvUW2TqZSZbcJo3rHusXHNavhKnv+nVy4x5dZeTrxrYJMKXGZ+n6Bedra8lt8vjx43fffbd+ qHPkYc/Op9kOu0rIUpFhhg/km3zj0fIbSKqfWZ/lvROapZM2qde4+neeNGpfq989EttVul0qktZV gClV92Mdd+PR8nKCYlb54x//+O9//3vdlveObJZO2mR42XV94l3CnemSu0rCUpGcrgJMr3E/1pE3 Hm3cjHu11/lOeT/WTtpk+BvA6uuv297FiO0qXS0VSe4qwPSG78d67Y1HV/uO2LPdj3U5bVI/ClpV vaV9AVdyV+lkqUhyVwGmN3w/1mluPPrnP//5k08+WW7SJZn5fqxLaJPG6aflr0aP7SqdLBVJ7irA 9Ebej3VpNx4NlHA/1rrqtFfx0o1vofzwww9/8YtfLO0IW1ddJXapSFpXAWYz8n6snV91vzSB92Ot 1E97NU4IdnKhRVddJWSpSHJXAeYx7n6sa3LHz8z7sQ7HGwRcX9FhV0lYKhLeVYCZdXg/1s5F3Y91 eE3E8Kt3eM+IQUddpfOlItUL5XQVgHlk3o91MOZWdINnp/1ipn3vvfeWswwh5PtYOlwqEttVAOYR eD/WYl79wQ9+sLGxMfJWdEs+05Rz695Sh0tFArsKwPzS7sdalJs33njj4ODg8ePHxR7HyFvRLedM U8L3sQzrcKlIWlcBmF/m/VirctPVrejSbt2bsFQks6sALErn92Otq8pNh7eiC7l1b9pSkUFYVwFY oK7uxzpSJ7cIrOv81r2VqKUipaiuArBAXd2PdVirpSfz+1hGrhMZhC0VqeR0FYBV1d7FMCHfx9Iw bp3IYO2XigCwcAnfx1KZvE5kOFXb0paKALBAId/HUplmnchgvZeKALBA3d66d1ykbteJDPKWigDQ km5v3TtSwjqRQcxSEQBa1e2te8dJqDU5S0UAWKyE72OZLGGdyKCLW/cC0KrA72MZp/N1IgO7gQCr KPD7WJZvynUiA3fLBeiz/Fv3Ll/aOhEA2pNz694cgetEAFiscob/6U9/Ws7znd+6N0rgOhEAFqj+ fSz1rwLr5Na9gTLXiQCwQI0F4F19JUuO/HUiACxQ/UqYbr+SpXM9WicCwAJZ9F2xTgRgDa3nlTDW iQBQWcMrYawTAWA9Tb5173ruHQOwJqa5de8a7h0DsD4Sbt0LAF0JuXUvAHTFUhEA1pmLYQBYcy6G AQAAAAAAACDK/wAWfQ80DQplbmRzdHJlYW0NCmVuZG9iag0KNTAgMCBvYmoNCjw8L1R5cGUvWE9i amVjdC9TdWJ0eXBlL0ltYWdlL1dpZHRoIDYwMC9IZWlnaHQgMzcxL0NvbG9yU3BhY2UvRGV2aWNl UkdCL0JpdHNQZXJDb21wb25lbnQgOC9JbnRlcnBvbGF0ZSBmYWxzZS9GaWx0ZXIvRmxhdGVEZWNv ZGUvTGVuZ3RoIDY3MjM+Pg0Kc3RyZWFtDQp4nO3dv4sc5xkH8PwVaQNpUrgyKVSoMfgKGxsEgssZ BJIqE4OKKBwkIIeYNNeYGIIJNgQT1FwCiRsVByEuFBQIxC6MCATEEQLBHGpCTEQSnaXNoMHDePa9 0+ruZue7734+uFiP9na/O8/M8+782pnNAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAKHh4fnzp372te+9sMf/rA//e233+4m ds8Z+PrXv/7Xv/61+5NPPvmk+6dvfOMb//73vwfv9ejRo+9+97vNv/7iF78YTOk/v32d73znO//7 3/8Gr/CPf/zjW9/6Vv/JTc7f//73J/i8A4OPf7IXPPGLADCVfhvvDyjPOg5++OGHg38tDoX9l519 Oa4N3r19qeKY0h8HuzH0TMbBE49i7SdqdeM7AKuiPy70N8GK4+BRI85gQ697/vy4MNjWa//3tdde 6548v814lFOOg/2/akfewebtgtoZZQQEWFGD7aOun59mHJzNbfd1Bjs2mwGoGX2al20m9t+rHZK6 xzdv3mzHqdu3b7d//vnnn7fvONj27G+WFsem4meZn1h8nfk87Qg+2Drub+T2x9ajPk43B7oP0r3C YJu6v+3ZfZfowv/ud7/r5slgr3J/l/Xgn546xwDq1nXRDz74oD9CPdM4OOu16OLu0Pm3a16qHT2b 5//rX/9qHrT9uT9QDsbo5gl/+9vfjhkH+8PEUY19ke3Bo15nkGdzc/P1118fjIP9EafTvtdRH2f+ +X3dd4n5VMfvte7+cH6XdTcULjLHAOrWHxf6B+YWOT7Y75nzzzlq0Ox2JLZ/0vbkZmJ/HBm87+B4 YvH44GBLs/3b+UH5qccHj3md4jlF/f2i809oZ+lRf95t97VTujG0fbX5v+1Stc8cvOxgdBv8az9h O6sXnGMAdeuPg/0m+azjYGuw6VHcuOhG27YP93t+NxYPhpVuSD1mHCxuiM0Px0/9LMe8TnFbsj8O zp/OOj975z/OYIdq97/9wa57u/5O18FI171s/w/nI3UWnGMAdRt00W6Qetb9osXXLLbftjM3Gy+7 u7tf++rWXDOatO/b35HYP8R2zDg4v/fv+HHwqM9yzOvM55l9dRycH7n6R06P+jgLjoPz+zAXGQeL 
g+lTP+kiVQaow6CLdn27teA4ON9sBy19/h2bJ7/55puDnXIXL168cuXKYOIzjYPFqw6P+bzzjnmd p46Di2wPPnUcLO787Lbd+hu/bcgTbw8uOMcA6nZUF32mcXD+Qon+sa3Bk/tDbdeEixMXHAf7Y9Bg pDjm+OBRn+WY13nqOLjI8cGTjYODMat900XGwUFp+jNtwTkGULdjjnktPg7OjtjJdtTJh92T+yec zE88fhzs5+wf0zw+wDOd+zp4naeOg7MjDrodv5v3mbYH+xbZL1oszTH7Wp0vCqyb4rgwOI9xweOD g159TEcd7OU7auJTx8EuZ/GSh8WvH5xXfJ1FxsHZAtcPnmAcnPWGs2bKP//5z8FVlseMg7Njrx/0 ezgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACss4ODgwsXLmxsbLzyyisPHjxoJx4eHm5t bTUTr1279vDhw/EmAsCEmrHp0qVL+/v7zeO9vb12hHr8+PGNGzfu3LnTTtzZ2WkejDERAKbVbAxu b2+3W2fNmHj58uVmk7B70D5hc3NzpIlTfnIAOGJ7cDA4tk8YY+KUnxwAnpg/bHf37t3ucTdmjTFx kXgfA5y1RZrPt1//+Kn/nbTvEqTZTLt48WI7JDVDVXuqjO1BgBOMg5999tkLL7zQHfR59OjR9evX 33rrrWb6uXPnvtlz+/bt7jlNb7x3795SPxs9xc00xwcBTjkOtoNgO94Nxsfmf8+fP9+Ofc3jN954 wyn0EypuD7bndu7u7s7mTvg824kAsU4zDvYHwVlpO7HbBvz000+bDcb2Qbup+Pzzz9tMWLJm+Nt4 4qWXXur2Vbp+EFhzJx4HP//88/4gOCttD3bbgO+++27zzC+++OLVV19tn3Dr1q12ZASACZ1sHDx/ /vyVK1eazbr+WDZ/fPDq1avNONhtGDbj4Msvv+woIQA5TjYONuPdzZs3m3HtxRdfbB500+fPn2k2 A/sbht1Y+dxzzxkQAZjcKc+TGZwM0x8HZ1/uDu0ODvY1E9utxZE/HwAc5/TXTdy6dasd0QbTu72g 7Wg4++rYZxwEIMFZXT/YDGp///vf568fHFw52IyJzhcFIIffkwFgnRkHAQAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGDg269/fOb/Tf2ZAGBRxkEA1plxEIB1 ljMO5iQBYH3kjD45SQBYHzmjT04SANZHzuiTkwSA9ZEz+uQkAWB95Iw+OUkAWB85o09OEooUCKhS TnPLSUJRToFykgAVyGkpOUkoyilQThKgAjktJScJRTkFyknCenr8+PGNGzc2NjZeeuml/f39duLh 4eHW1lYz8dq1aw8fPhxvImcup6XkJKEop0A5SVhP77///p07d5oHd+/ebUeodmRsJ+7t7e3s7My+ HC7PdiJjyGkpOUkoyilQThLWULONdvny5QcPHhw18eDgYHNzs3k8xsQJPvAayGkpOUkoyilQThLW UDMkbW9v/+xnP+vvF20ntrsum/Hr0qVLzfQxJk75yeuV01JyklCUU6CcJKyhZni6cOFCu7uyG6q6 HaSz3pg1xsRFEn7MMxqjpax6EopyCrT8JOM1VVbO0jb9bA8uzRgtZdWTUJRToJwkrKFmSHrzzTcH w5Pjgystp6XkJKEop0A5SVhP3fmi3SZbe27n7u7ubO6Ez7OdyBhyWkpOEopyCpSThPXUXdb3yiuv dNtorh9cXTktJScJRTkFykkCVCCnpeQkoSinQDlJgArktJScJBTlFCgnCVCBnJaSk4SinALlJAEq kNNScpJQlFOgnCRABXJaSk4SinIKlJMEqEBOS8lJQlFOgXKSABXIaSk5SSjKKVBOEqACOS0lJwlF OQXKSQJUIKel5CShKKdAOUmACuS0lJwkFOUUKCcJUIGclpKThKKcAuUkASqQ01JyklCUU6CcJEAF clpKThKKcgqUkwSoQE5LyUlCUU6BcpIAFchpKTlJKMopUE4SoAI5LSUnCUU5BcpJAlQgp6XkJKEo p0A5SYAK5LSUnCQU5RQoJwlQgZyWkpOEopwC5SQBKpDTUnKSUJRToJwkQAVyWkpOEopyCpSTBKhA TkvJSUJRToFykgAVyGkpOUkoyilQThKgAjktJScJRTkFykkCVCCnpeQkoSinQDlJgArktJScJBTl FCgnCVCBnJaSk4SinALlJAEqkNNScpJQlFOgnCRABXJaSk4SinIKlJMEqEBOS8lJQlFOgXKSABXI aSk5SSjKKVBOEqACOS0lJwlFOQXKSQJUIKel5CShKKdAOUmACuS0lJwkFOUUKCcJUIGclpKThKKc AuUkASqQ01JyklCUU6CcJEAFclpKThKKcgqUkwSoQE5LyUlCUU6BcpIAFchpKTlJKMopUE4SoAI5 LSUnCUU5BcpJAlQgp6XkJKEop0A5SYAK5LSUnCQU5RQoJwlQgZyWkpOEopwC5SQBKpDTUnKSUJRT oJwkQAVyWkpOEopyCpSTBKhATkvJSUJRToFykgAVyGkpOUkoyilQThKgAjktJScJRTkFyknCOnv8 +PGNGzfu3LnT/u/h4eHW1tbGxsa1a9cePnw43kTOXE5LyUlCUU6BcpKwzvb29poRqh0H+2NiM31n Z2ekiYwhp6XkJKEop0A5SVhbBwcH169f397ebsepZsPt8uXLDx48aP9pc3OzeTzGxCk/c71yWkpO EopyCpSThPXUbKn96Ec/unfvXre91oxTzZjY7rpsxq9Lly7t7++PMXHKj12vnJaSk4SinALlJGE9 7e3t7e7u9vdb3r17tzuE141ZY0xcJN7HPKMxWsqqJ6Eop0DLTzJeR2XldJtp/XHQ9uBKG6OlrHoS inIKlJOENdSeHtPXbBs6PrjSclpKThKKcgqUk4R11t8ebB83A+Js7oTPs53IGHJaSk4SinIKlJOE deb6wWrktJScJBTlFCgnCVCBnJaSk4SinALlJAEqkNNScpJQlFOgnCRABXJaSk4SinIKlJMEqEBO S8lJQlFOgXKSABXIaSk5SSjKKVBOEqACOS0lJwlFOQXKSQJUIKel5CShKKdAOUmACuS0lJwkFOUU KCcJUIGclpKThKKcAuUkASqQ01JyklCUU6CcJEAFclpKThKKcgqUkwSoQE5LyUlCUU6BcpIAFchp KTlJKMopUE4SoAI5LSUnCUU5BcpJAlQgp6XkJKEop0A5SYAK5LSUnCQU5RQoJwlQgZyWkpOEopwC 5SQBKpDTUnKSUJRToJwkQAVyWkpOEopyCpSTBKhATkvJSUJRToFykgAVyGkpOUkoyilQThKgAjkt JScJRTkFykkCVCCnpeQkoSinQDlJgArktJScJBTlFCgnCVCBnJaSk4SinALlJAEqkNNScpJQlFOg nCRABXJaSk4SinIKlJMEqEBOS8lJQlFOgXKSABXIaSk5SSjKKVBOEqACOS0lJwlFOQXKSQJUIKel 
5CShKKdAOUmACuS0lJwkFOUUKCcJUIGclpKThKKcAuUkASqQ01JyklCUU6CcJEAFclpKThKKcgqU kwSoQE5LyUlCUU6BcpIAFchpKTlJKMopUE4SoAI5LSUnCUU5BcpJAlQgp6XkJKEop0A5SYAK5LSU nCQU5RQoJwlQgZyWkpOEopwC5SQBKpDTUnKSUJRToJwkQAVyWkpOEopyCpSTBKhATkvJSUJRToFy kgAVyGkpOUkoyilQThKgAjktJScJRTkFykkCVCCnpeQkoSinQDlJgArktJScJBTlFCgnCevp4ODg woULG0/cuXOnnXh4eLi1tdVMuXbt2sOHD8ebyJnLaSk5SSjKKVBOEtZQMzZdunRpf39/9mRAvHjx YvP48ePHN27caMfEvb29nZ2d5sEYExlDTkvJSUJRToFykrCG7t692x+S3n///WaoagbHy5cvP3jw YPZkcNzc3GwejzFxms9cu5yWkpOEopwC5SRhzXXbhs04tb293e66HHXipB+3WjktJScJRTkFyknC Omv3W+7u7s6ebCR2h/C6MWuMiYsE+5hnNEZLWfUkFOUUaPlJxuulrKj2DJZ2EJw92W9pe3B1jdFS Vj0JRTkFyknCemrPF+3OFJ09GaccH1xdOS0lJwlFOQXKScIamh8EZ1/dRzo44fNsJzKGnJaSk4Si nALlJGENNUPSxle1Y6LrB1dXTkvJSUJRToFykgAVyGkpOUkoyilQThKgAjktJScJRTkFykkCVCCn peQkoSinQDlJgArktJScJBTlFCgnCVCBnJaSk4SinALlJAEqkNNScpJQlFOgnCRABXJaSk4SinIK lJMEqEBOS8lJQlFOgXKSABXIaSk5SSjKKVBOEqACOS0lJwlFOQXKSQJUIKel5CShKKdAOUmACuS0 lJwkFOUUKCcJUIGclpKThKKcAuUkASqQ01JyklCUU6CcJEAFclpKThKKcgqUkwSoQE5LyUlCUU6B cpIAFchpKTlJKMopUE4SoAI5LSUnCUU5BcpJAlQgp6XkJKEop0A5SYAK5LSUnCQU5RQoJwlQgZyW kpOEopwC5SQBKpDTUnKSUJRToJwkQAVyWkpOEopyCpSTBKhATkvJSUJRToFykgAVyGkpOUkoyilQ ThKgAjktJScJRTkFykkCVCCnpeQkoSinQDlJgArktJScJBTlFCgnCXAyUWtxTpicJBTlFCgnCayW nHUnJ0lUmJwkFOUUKCcJrJacdScnSVSYnCQU5RQoJwmslpx1JydJVJicJBTlFCgnCayWnHUnJ0lU mJwkFOUUKCcJrJacdScnSVSYnCQU5RQoJwmslpx1JydJVJicJBTlFCgnCayWnHUnJ0lUmJwkFOUU KCcJrJacdScnSVSYnCQU5RQoJwmslpx1JydJVJicJBTlFCgnCayWnHUnJ0lUmJwkFOUUKCcJrJac dScnSVSYnCQU5RQoJwmslpx1JydJVJicJBTlFCgnCayWnHUnJ0lUmJwkFOUUKCcJrJacdScnSVSY nCQU5RQoJwmslpx1JydJVJicJBTlFCgnCayWnHUnJ0lUmJwkFOUUKCcJrJacdScnSVSYnCQU5RQo Jwmslpx1JydJVJicJBTlFCgnCayWnHUnJ0lUmJwkFOUUKCcJjOrw8HBra2tjY+PatWsPHz48/Qvm rDs5SaLC5CShKKdAOUlgPI8fP75x48adO3eax3t7ezs7O6d/zZx1JydJVJicJBTlFCgnCYyn2Ri8 fPnygwcPmscHBwebm5vt49PIWXdykkSFkSQ5SVSYnCQwnmbs297ebneHNmPipUuX9vf3T/maOetO TpKoMJIkJ4kKk5MExnP37t3usOAzjYPfBjhrIzc8KBhjexAAVsUYxwcBYFW054vu7u7Ozu58UQBY IWd+/SAAAAAAAAAAAAAAa+vWrVtvvfVW+/iLL774/ve/P8kZqv0YkzNPisyWeeYJrKj79+93a+uj R4+uX79++/bt2ZO1qX2wfF2Md99995tPTLhemycdi8qCYWbmCayOdpW5efNmN+XTTz+9evXqf//7 3x/84AcT/nZNE6Nbf/vtZWzte7Xdo5staz5PWhaVAYsK1KFZR37605+eP3/+3r173cTma+RPfvKT c+fOTfs1sonRby/PP//8EhpL8zW+fdPPPvvshRde6N5xnedJy6IyYFGBOrRHMX7729/219lmvW7W 4mYlaverLGd17n+7bt9xvr30N0ZG0rxL+815cHxnnedJy6IyYFGBOjSr8DvvvDO/56R/qL1Zg0b6 Mvnee+91L9t9u+7rx2jW6ybq2CcetE3smz1dquXMk1lvtoTMk5ZFZcCiAnVo1o72t7vbAwrPPfdc u9eraXovv/xyfw/YmeuON/3pT3+a9b5dt9pTMsaO0f8WffXq1bZLNBO3t7fn58MS5snsq7Nlknly FIuKRQWq1DS3X//6181K1LS1K1eu9L9SHhwc/PGPfxz13Zvvq01Xab6mNt9pB+ecd+vvqDG6nUX9 zZxmnrzxxhvdN+d+h1nCPJn1ZstvfvOb5c+To1hULCpQpfa7fbdSD86CGFXTT/785z93Pa1/OuKg vYwXoPsyP+s1sUGj6x9kWYL+bFn+PDmGRcWiApze/NH8wZ6lF198sb/f6cw1b/Hqq692zeqTTz5p vy0PGt0SkvQdM1uWnGT25XjXDXlTsagURS0qwAn0T7Fr1tn2cXvJ1XLW3PbchvlT+Kb9Cj35bOl0 pz5Ofqb95PPEogKcuW4nUrtX7Ve/+lW38i7tDPPmjT766KP5kwe60+2akB9++OHYMfoSZkv3Xv2T Hic80z5hnlhUgJF0GxrtF9pldtr2zP/Z3Pfnbs9S20zakzGWlqo14Wzp//ZI/yDX8g94zQezqMyb cLYAJ9NtaLQrb7djp1mdf/nLX7733nvLP697cBFcd2BlaV+nB/Nk1tvfNcls6W/69R8v+RebLSrz 0hYV4Jn0NzRmXzbV7jSMaffh9Dd2ml7X9JZlnoUymCez3tkpk8yW/tyY5KIzi8q8zEUFeFb9jYu0 q3qbbJN0kiXPk8F5j/0TDvtnwvQ3/Sa5cY9Fpfi+sfMEWNDgu3T/hPO1teR5cv/+/bfffru/q7O4 23PyNjvhohJyqcg8qw/kO/7Go+0vkHTPWZ/Le4+ZLZPMk/4Y1//Nk8HYN+pvj8QuKtNeKpK2qAAL 6u7HetSNR9vTCZqu8oc//OE///nPul3eW5wtk8yT+cuu+413CXemS15UEi4VyVlUgMUN7sdavPHo 4GbcdV/nu+D9WCeZJ/O/ANa//nrsTYzYRWWqS0WSFxVgcfP3Y33qjUfrviP2ye7Hupx50t8L2o16 S/sBruRFZZJLRZIXFWBx8/djXeTGo3/5y18++uij5SZdkhPfj3UJ82Rw+Gn5V6PHLiqTXCqSvKgA iyvej3VpNx4NlHA/1r7usFfz1oNfofzggw9+/vOfL20P21SLSuylImmLCnAyxfuxTn7W/dIE3o+1 
0z/sNTggOMmJFlMtKiGXiiQvKsBpHHU/1jW542fm/Vjn480Czq+YcFFJuFQkfFEBTmzC+7FOLup+ rPPXRMy/+4T3jJhNtKhMfqlI90Y5iwrAaWTej3V2xK3oZl9t+02nfeedd5ZzGULI77FMeKlI7KIC cBqB92Nt+ur3vve9jY2N4q3olnykKefWva0JLxUJXFQATi/tfqzNcPPaa6/t7Ozcv3+/2eIo3opu OUeaEn6PZd6El4qkLSoAp5d5P9ZuuJnqVnRpt+5NuFQkc1EBOCuT34+1rxtuJrwVXcite9MuFZmF LSoAZ2iq+7EWTXKLwL7Jb93bibpUpBW1qACcoanuxzpv1KEn8/dYiteJzMIuFenkLCoAtRrvZJiQ 32MZOOo6kdnaXyoCwJlL+D2WzvHXicynGlvapSIAnKGQ32PpLHKdyGy9LxUB4AxNe+veoyJNe53I LO9SEQBGMu2te4sSrhOZxVwqAsCopr1171ESxpqcS0UAOFsJv8dyvITrRGZT3LoXgFEF/h7LUSa/ TmRmMxCgRoG/x7J8C14nMnO3XIBVln/r3uVLu04EgPHk3Lo3R+B1IgCcrbbD//jHP277/OS37o0S eJ0IAGeo/3ss/Z8Cm+TWvYEyrxMB4AwNLgCf6idZcuRfJwLAGeqfCTPtT7JMboWuEwHgDLnou+M6 EYA1tJ5nwrhOBIDOGp4J4zoRANbT8bfuXc+tYwDWxCK37l3DrWMA1kfCrXsBYCoht+4FgKm4VASA deZkGADWnJNhAAAAAAAAAIjyfyqiljwNCmVuZHN0cmVhbQ0KZW5kb2JqDQo1MSAwIG9iag0KPDwv VHlwZS9QYWdlL1BhcmVudCAyIDAgUi9SZXNvdXJjZXM8PC9Gb250PDwvRjIgNyAwIFI+Pi9YT2Jq ZWN0PDwvSW1hZ2U1MyA1MyAwIFIvSW1hZ2U1NCA1NCAwIFI+Pi9Qcm9jU2V0Wy9QREYvVGV4dC9J bWFnZUIvSW1hZ2VDL0ltYWdlSV0gPj4vTWVkaWFCb3hbIDAgMCA2MTIgNzkyXSAvQ29udGVudHMg NTIgMCBSL0dyb3VwPDwvVHlwZS9Hcm91cC9TL1RyYW5zcGFyZW5jeS9DUy9EZXZpY2VSR0I+Pi9U YWJzL1MvU3RydWN0UGFyZW50cyAxOD4+DQplbmRvYmoNCjUyIDAgb2JqDQo8PC9GaWx0ZXIvRmxh dGVEZWNvZGUvTGVuZ3RoIDMxND4+DQpzdHJlYW0NCnicjdRLawIxFIbhfSD/4Vuq4Jmc3AfEhZcW SwVLp3RRupCirrS0/3/RjEqZgtYzA7PL8zIhJ6hWGI2q5XQxgxmPMZlNwdhpNTQYsiOHUAfyAdlb fG+02g60wnw5BToruXpcH3bobQ7Dl+f+mZk0WlV3FsxkPJqtVgxTXobPkco3JUo1mr1Wpi0a3Gv1 1kP/Hc2DVvPmQsheC/2luU4U/JG+AbpboIvgXHbASjQv0ZKhWoIFCRYi2SjRokTzTEm0bUmi2URW tG1ZorGlJMFqAVZn4iz6UTYCLrty2mTc1UnpcFF0PPjmLBQqePJZpLkz8NWOPpnTU4cypYEyIlvY bOh0B7wOcNDKB3MM2ZTJhrZ2nOiAj5KrFvv1bhMcZp94utTzV3pcrptkf4Mm/1tsBz12g74b/AG1 VwTBDQplbmRzdHJlYW0NCmVuZG9iag0KNTMgMCBvYmoNCjw8L1R5cGUvWE9iamVjdC9TdWJ0eXBl L0ltYWdlL1dpZHRoIDYwMC9IZWlnaHQgMzcxL0NvbG9yU3BhY2UvRGV2aWNlUkdCL0JpdHNQZXJD b21wb25lbnQgOC9JbnRlcnBvbGF0ZSBmYWxzZS9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDU0 Nzc+Pg0Kc3RyZWFtDQp4nO3dv2sk5/0HcP8VaQNuUrgyKa5wY7AKGxsODIoCAuWqLwmoyAVBAkqI SaPGxBBMsCGY4EbJl8SNCzUx3zgoEIhdGBEIHOqCEdekiUgi+bTf4QYP49lZ3Z72x7w1z+uFi/Wc tPvWM88+n52Z59mZTAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAGAAl5eXd+7ceeaZZ370ox+1t7/55pvNxuZnOr72ta/9/e9/ b37l008/bf7p61//+r/+9a9rXm7aH/7wh+X+aXWe6ST/+Mc/vvGNb3TCf/DBB7PCzMrcabHJfC0A QJT2IN8e/J+2DnbqyKxCcE0dXHopnLMOPnr06Lvf/e50mG9961v//e9/r8/cLoVztgAAUdqDfDPy T2bUwVl1qiklv/rVr9o/X/9v78t1nqp+uXaAxc1ZB5v61URqDus6f047c/1b08X0iS0AQJTOwU4z bi9SBzu/3vtynaeqS0+7DrYPrzrVpH7y6drd/q2q/P3xj398Yh28Pk/9u70/09k4fwsAEKUZz997 772qOjRV46nq4KRVm64/Gdj7VE0RaapGu9L1Fujek5PTZyafWAdnHTPWOa+plZ3jwflbAIAo7UG+ HtvrsjLP9cH2Ydr0z/QWzWuutTU1pa5TTTWpf6V9aNb8U+9RW525qa3z1MHp87HNr1d/xZzXB+ds AQCitOtg+yDoaetgrXNEds31wWueqj3rclZZqWtZvb2udJ3qOZnv+uAidbD32t8TWwCAKJ2Tfs0h 4dOeF+19zlnnG5unakre9RMv23Vw+rxo/SrTVW+6Mra3L3he9MYtAECUWfM92uXpiYVgVg3qrNHr faqm6jWHTvWW3rmjTd1sl9H6J6dfcZ7jwVkTO+sM18yTuXELABBlepBvn5acsw5OV5N2Hbn+5ZrK 27k+2Cl29VN1SmR7tcXNrg9OWgeYT7Vu4sYtAECU3kG+KQ3znxftPZk55/rBpvB1Ctz0U/VeOmxq zQ3mi06eZh399edF52wBAKJcX5ie6vpgp0j1loDr1+s9M2N9RPup2isE//nPfzZX8ab/dZ71g9NP O/2K818fnKcFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgHBnZ2d3797d2Nh49dVXz8/P 642Xl5dbW1vVxt3d3YuLi6VsBIA0VcHa3t4+PT2tHh8dHdVl6+rqan9///j4uN54cHBQPVhwIwAE qg4G9/b26kO2qibu7OxUh4TNg/oHNjc3F9846F8JAP16jwc7xbH+gQU3DvpXAsBM09fyTk5OmsdN IVtw4zxJPgFYtnkGn2/+zydP/O/GYyzhqmO3119/va5TVf2qp8o4HgSKcoM6+Pnnn7/44ovNRZ8P P/zw2S99/PHHzc/cuXPn2Zb333+//qcvvvjipZdeqjc+//zzLh4NqPfYzfVBoCgL1sGqCN67d68e SOsCV9e7Tq2s/umVV1558OBB/TNNuWz/OuvXezxYT/g8PDycTM0CvfFGgFiL1MGqqL322mvtT/vN 
P3Xq4KNHj+7fv1+Vv1n1cT1/LNOq8rfx2Msvv9ycwLR+ECjHInXws88+6xzNzap31f++8MILVb2r f8AxIAAhFqyDb7zxRudf33777boOdq4PNudCK9Uv1hufe+45B4MADGi5dXDW8eAszXHikv8qAJjP eq4PNqZLZ338uKK/DgCut/h80WbtwzXzRRud+aLmyQAwrMXXDzYX+zrrB2edF22vH3R9EIBh+T4Z AEqmDgIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAI2rq6v9/f2NjY2XX3759PS03nh5ebm1tVVt3N3dvbi4WMpGAAj0 7rvvHh8fVw9OTk7qslVXxnrj0dHRwcHB5MtyeeONo/HN//lk6f8N/TcBlKs6cNvZ2Tk/P5+18ezs bHNzs3q84MYh/riVUAcBxqSqU3t7e7/4xS/a50XrjfX5zKqobW9vV9sX3DjoX7lM6iDAmFQ16+7d u/U5zKZ+NSdIJ61CtuDGecJ8chusog4Cq7PCAZRRWMWhn+NBx4MAt0VVp3784x93apbrg9dQBwFG ppkv2hzH1RM+Dw8PJ1OzQG+8cTTUQYCRadb6vfrqq82Bm/WDs6iDAJRMHQSgZOogACVTBwEomToI QMnUQQBKpg4CUDJ1EICSqYMAlEwdBKBk6iAAJVMHASiZOghAydRBAEqmDgJQMnUQgJKpgwCsX071 yUkCQDlyqk9OEgDKkVN9cpIAUI6c6pOTBIBy5FSfnCQAlCOn+uQkAaAcOdUnJwkA5cipPjlJAChH TvXJSQJAOXKqT04SAMqRU31ykgBQjpzqk5MEgHLkVJ+cJACUI6f65CQBoBw51ScnCQDlyKk+OUkA KEdO9clJAkA5cqpPThIAypFTfXKSAFCOnOqTkwSAcuRUn5wkAJQjp/rkJAGgHDnVJycJAOXIqT45 SQAoR071yUkCQDlyqk9OEgDKkVN9cpIAUI6c6pOTBIBy5FSfnCQAlCOn+uQkAaAcOdUnJwkA5cip PjlJAFiiq6ur/f394+Pj+n8vLy+3trY2NjZ2d3cvLi6WsnEROdUnJwkAS3R0dFSVrboOtmtitf3g 4GDxjQvKqT45SQBYlrOzs/v37+/t7dXFqzqa29nZOT8/r/9pc3OzerzgxgUT5lSfnCQALEV1+PaT n/zkwYMHzUFcVbyqmlifz6yK2vb29unp6YIbFwyZU31ykgCwFEdHR4eHh+2TmScnJ811vaaQLbhx niSfzLaK6nMzOUmAeaxw9GQUmmO3dh10PHgrkgCwuHp6TFt1bOj64K1IAsAStY8H68dVQZxMzQK9 8cYF5VSfnCQALJH1g7cuCQDlyKk+OUkAKEdO9clJAkA5cqpPThIAypFTfXKSAFCOnOqTkwSAcuRU n5wkAJQjp/rkJAGgHDnVJycJAOXIqT45SQAoR071yUkCQDlyqk9OEgDKkVN9cpIAUI6c6pOTBIBy 5FSfnCQAlCOn+uQkAaAcOdUnJwkA5cipPjlJAChHTvXJSQJAOXKqT04SAMqRU31ykgBQjpzqk5ME gHLkVJ+cJACUI6f65CQBoBw51ScnCQDlyKk+OUkAKEdO9clJAkA5cqpPThIAypFTfXKSAFCOnOqT kwSAcuRUn5wkAJQjp/rkJAGgHDnVJycJAOXIqT45SQAoR071yUkCQDlyqk9OEgDKkVN9cpIAUI6c 6pOTBIBy5FSfnCQAlCOn+uQkAaAcOdUnJwkA5cipPjlJAChHTvXJSQJAOXKqT04SAMqRU31ykhBO VwGWKGdIyUlCOF0FWKKcISUnCeF0FWCJcoaUnCSE01WAJcoZUnKSEE5Xgdvi7Ozs7t27G48dHx/X Gy8vL7e2tqotu7u7FxcXS9m4iJwhJScJ4XQVuBWqgrW9vX16ejp5XBBff/316vHV1dX+/n5dE4+O jg4ODqoHC25cUM6QkpOEcLoK3AonJyftOvXuu+9W9asqjjs7O+fn55PHxXFzc7N6vODGBXPmDCk5 SQinq8Ct0xwbVsVrb2+vPp+5rI0LZssZUnKSEE5XgdulPpl5eHg4eXyQ2FzXawrZghvnyfDJbKsY Um4mJwnhdJUQKxw3GZF6WktdBCePT2Y6HsxPQjhdBW6Ler5oM1N08rh4uT6Yn4RwugrcCtNFcPLV c6SdWaA33rignCElJwm9cnZQThLgGlWd2viquiZaP5ifhF45OygnCTACOUNKThJ65eygnCTACOQM KTlJ6JWzg3KSACOQM6TkJKFXzg7KSQKMQM6QkpOEXjk7KCcJMAI5Q0pOEnrl7KCcJMAI5AwpOUno lbODcpIAI5AzpOQkoVfODspJAoxAzpCSk4ReOTsoJwkwAjlDSk4SeuXsoJwkwAjkDCk5SeiVs4Ny kgAjkDOk5CShV84OykkCjEDOkJKThF45OygnCTACOUNKThJ65eygnCTACOQMKTlJ6JWzg3KSACOQ M6TkJKFXzg7KSQKMQM6QkpOEXjk7KCcJMAI5Q0pOEnrl7KCcJMAI5AwpOUnolbODcpIAI5AzpOQk oVfODspJAoxAzpCSk4ReOTsoJwkwAjlDSk4SeuXsoJwkwAjkDCk5SeiVs4NykgAjkDOk5CShV84O ykkCjEDOkJKThF45OygnCTACOUNKThJ65eygnCTACOQMKTlJ6JWzg3KSACOQM6TkJKFXzg7KSQKM QM6QkpOEXjk7KCcJMAI5Q0pOEnrl7KCcJMAI5AwpOUnolbODcpIAI5AzpOQkoVfODspJAoxAzpCS k4ReOTsoJwkwAjlDSk4SeuXsoJwkwAjkDCk5SeiVs4NykgAjkDOk5CShV84OykkCjEDOkJKThF45 OygnCTACOUNKThJ65eygnCTACOQMKTlJ6JWzg3KSACOQM6TkJKFXzg7KSQKMQM6QkpOEXjk7KCcJ MAI5Q0pOEnrl7KCcJMAI5AwpOUnolbODcpIAI5AzpOQkoVfODspJAoxAzpCSk4ReOTsoJwkwAjlD Sk4SeuXsoJwkwAjkDCk5SeiVs4NykgAjkDOk5CSJktMskgCjlDOk5CSJCiNJchJg/S4vL7e2tjY2 NnZ3dy8uLhZ/wpwhJSdJVBhJkpMAa3Z1dbW/v398fFw9Pjo6Ojg4WPw5c4aUnCRRYSRJThJFs1CC 6mBwZ2fn/Py8enx2dra5uVk/XkTOeycnSVQYSZKTXF1N/vf/Hi79v+ppb3WzwOpUtW9vb68+HVrV xO3t7dPT0wWfM+e9k5MkKowkyUmiwuQkgdU5OTlpLgs+VR38JsCyrXjAgx6rOB4EgNtiFdcHAeC2 qOeLHh4eTpY3XxQAbpGlrx8EAAAAAAAAAAAAoFgffvjhG2+8UT/+4osvfvCDHwwyQ7UdY3DapJdm maZN4JZ6+PBh82599OjR/fv3P/7448njd1P9YP2aGG+//fazjw34vtYmDV1lzjATbQK3R/2Wef/9 
95stn3322b179/7zn//88Ic/HPC7a6oYzfu3PbysWv1a9ejRNEvhbVLTVTp0FRiH6j3y85///IUX Xnjw4EGzsfoY+bOf/ezOnTvDfoysYrSHl+eff34NA0v1Mb5+0c8///zFF19sXrHkNqnpKh26CoxD fRXj97//ffs9W72vq3dx9Saqz6us5+3c/nRdv+L08NI+GFmR6lXqT86d6zslt0lNV+nQVWAcqrfw W2+9NX3mpH2pvXoHrejD5DvvvNM8bfPpuq0do3pfV1FXPfGgHsSebWlSradNJq1mCWmTmq7SoavA OFTvjvq7u+sLCs8991x91qsa9F555ZX2GbCla643/eUvf5m0Pl3X6ikZq47R/hR97969epSoNu7t 7U23wxraZPLVZhmkTWbRVXQVGKVqcPvtb39bvYmqYe073/lO+yPl2dnZn//855W+evV5tRpVqo+p 1Wfazpzz5v270hjNyaL2YU7VJt/73veaT87tEWYNbTJpNcvvfve79bfJLLqKrgKjVH+2b97UnVkQ K1WNJ3/961+bMa09HbEzvKwuQPNhftIaxDoDXfsiyxq0m2X9bXINXUVXARY3fTW/c2bppZdeap93 WrrqJV577bVmsPr000/rT8udgW4NSdquaZY1J5l8We+akjcUXaVXVFcBbqA9xa56z9aP6yVX63nn 1nMbpqfwDfsRevBmaTRTHwefaT94m+gqwNI1J5Hqs2q/+c1vmjfv2maYVy/00UcfTU8eaKbbVSE/ +OCDVcdoS2iW5rXakx4HnGmf0Ca6CrAizYFG/YF2nSNtPfN/MvX5uTmzVA8m9WSMtaWqDdgs7e8e aV/kWv8Fr+lgusq0AZsFuJnmQKN+8zYndqq3869//et33nln/fO6O4vgmgsra/s43WmTSet81yDN 0j70az9e8zc26yrT0roK8FTaBxqTLwfVZhrGsOdw2gc71VhXjS3rnIXSaZNJa3bKIM3Sbo1BFp3p KtMyuwrwtNoHF2mreqtsg4wka26TzrzH9oTD9kyY9qHfIDfu0VV6Xze2TYA5dT5LtyecF2vNbfLw 4cM333yzfaqz97Tn4MPsgF0lZKnING8fyHf9jUfrbyBpfqac5b3XNMsgbdKuce3vPOnUvpV+90hs Vxl2qUhaVwHm1NyPddaNR+vpBNWo8qc//enf//53act7e5tlkDaZXnbdHnjXcGe65K6SsFQkp6sA 8+vcj7X3xqOdm3GPe53vnPdjHaRNpr8BrL3+etWHGLFdZailIsldBZjf9P1Yn3jj0XHfEftm92Nd T5u0z4I2VW9tX8CV3FUGWSqS3FWA+U3fj3WeG4/+7W9/++ijj9abdE1ufD/WNbRJ5/LT+lejx3aV QZaKJHcVYH6992Nd241HAyXcj7WtuexVvXTnWyjfe++9X/7yl2s7wzZUV4ldKpLWVYCb6b0f6+Cz 7tcm8H6sjfZlr84FwUEmWgzVVUKWiiR3FWARs+7HWsgdPzPvxzodbxIwv2LArpKwVCS8qwA3NuD9 WAcXdT/W6TUR068+4D0jJgN1lcGXijQvlNNVABaReT/WyYxb0U2+OuxXI+1bb721nmUIId/HMuBS kdiuArCIwPuxVuPq97///Y2Njd5b0a35SlPOrXtrAy4VCewqAItLux9rVW6+/e1vHxwcPHz4sDri 6L0V3XquNCV8H8u0AZeKpHUVgMVl3o+1KTdD3You7da9CUtFMrsKwLIMfj/WtqbcDHgrupBb96Yt FZmEdRWAJRrqfqy9BrlFYNvgt+5tRC0VqUV1FYAlGup+rNNWWnoyv4+ld53IJGypSCOnqwCM1eom w4R8H0vHrHUik+KXigCwdAnfx9K4fp3IdKpVS1sqAsAShXwfS2OedSKTspeKALBEw966d1akYdeJ TPKWigCwIsPeurdXwjqRScxSEQBWathb986SUGtylooAsFwJ38dyvYR1IpMhbt0LwEoFfh/LLIOv E5k4DAQYo8DvY1m/OdeJTNwtF+A2y7917/qlrRMBYHVybt2bI3CdCADLVY/wP/3pT+txfvBb90YJ XCcCwBK1v4+l/VVgg9y6N1DmOhEAlqizAHyor2TJkb9OBIAlas+EGfYrWQZ3i9aJALBEFn03rBMB KFCZM2GsEwGgUeBMGOtEACjT9bfuLfPoGIBCzHPr3gKPjgEoR8KtewFgKCG37gWAoVgqAkDJTIYB oHAmwwAAAAAAAAAQ5f8BpEKttg0KZW5kc3RyZWFtDQplbmRvYmoNCjU0IDAgb2JqDQo8PC9UeXBl L1hPYmplY3QvU3VidHlwZS9JbWFnZS9XaWR0aCA2MDAvSGVpZ2h0IDM3MS9Db2xvclNwYWNlL0Rl dmljZVJHQi9CaXRzUGVyQ29tcG9uZW50IDgvSW50ZXJwb2xhdGUgZmFsc2UvRmlsdGVyL0ZsYXRl RGVjb2RlL0xlbmd0aCA1OTk0Pj4NCnN0cmVhbQ0KeJzt3b9rJOfhBvD8FWkDblK4MimucGOwijNn ODhQZBDIV31J4IpcECQgh5g0akwMwQQfBBOuUQKJGxdq4uKCAoHYhRGBwCHSBCOuS4hIIvm038GD h9ezK93q9sc8q/fzwcV6brX77OzMPDuz8+6MRgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADCAs7OzGzdufOMb3/jxj39cTn/n nXe6id19er75zW/+7W9/6/7k008/7f7pW9/61r///e/ecz19+vR73/te86+/+tWvelPK+7eP893v fvd///tf7xH+8Y9/fPvb3y7v3OT8wx/+MP3rbR+hl/zDDz8sX1f5gBe99t7smublAxCo3M6X2/+r 9mCvSi7qgvJhR1+1Uu/Z24caL5rR13uw69BZerB7kJ6uhS967b2EU758ANKU2/lyF2xiD17UOL0d ve7+3X5fp7ev1/7vG2+80d15fJ/xInPpwa6/ugfpdut6r6V8lvavxsv0mS8fgDS9/Z1u0z1LD47G 9vs6vQObTaE0bdI8bDOxfK62YrrbDx8+bHvn0aNH7Z//61//Kvfjygccfy3jAcrHH39dbRW2jznx Pr2J0798ANJ0m/QPPvigbKgr9WB3/2ceDywfqq2P5v7//Oc/mxvtTmJZlL2Obu7w97///ZIe7DJc UoVlD5Z9Nx7ykq7s7Q9O//IBSFNu58sv5qb5frBsmfH7XFSa7SM3f9v+SVt/zcSym3rP2/s+ceL3 g709zfZvx1tpvAfHT8gpH3nK7wenf/kARCl7sNwPumoPtnqni0w8Mtm1bVtJ7X3aiV0XX/Td3CU9 WJ6ueUkfzasHL3lpl98HgCi9rulK6qrHRSc+5sSDhG0TNe2zt7fX25triqN93nZi2cvl307swfEz Nqfswec4LjrLywcgykWnfJSH/p7ZBeOFMnGYXvmMzZ3feuut3mHMO3fuvPnmm72JV+rBiaMOSxPP 
Kje2UiVtQXCsq2loa1rEtEWS/ZluCKTRPLF3MnSZTrOffSHzWO3rmsSnvp1XtKFBOREOFmLWdBe7 uR+jUbZMy6JbOFbVoFc9AI+GDuNEPKhprhmRt1J9rFJJXLvaqvSXuIzKMkcIvd+O0FWWzZb3CN3E 0WSepDNUvdynavAhT8rYDUZ5HK1GN/Ei++a9XTwaNFvFGu5fAMCNDU3OiW4kxl3f5MbVq0swtLF1 mGjXTTfK0feR2I5I9W6I21ZxL20MJ55feJuzU/3oMx3gx4Z25DQ/Sps2G4vsjl0ohbvPtWq9idNo Ea/8cLWi3vuHxZaE17NJkq/YuFiNHCur0Y19f2d7TDf5YjyOC29dcqIbvfYD3NjQK53oRqObe4Ee 6diqm/Sgm1LtrgSsu1OtTLlOELziTbL5PyPaR7z6DAbEa0PvemK8at3cmnUdL66RAKi9nqVZdXd/ XRS2DliNEluqIvQuKr+MsnTqvhtli0VSdmkfs6W+bXi8BgIc1dBDn+gopZr7wj4TS5tuMLn68JTP OICnGjr4Ez0lFeYtfUbnTZfhWDtPeXQDDoVIb1gI6dIeaA/80ZrshUc07cJ+myziydulSyWX6bdo nkzQn7ZaubddcFHVLWVejW7WO1F9hCoIjUP39LZFILnEkv7worNNK5VhKM2RTNJpGREKl+UhMHSs eb3T36q559zpUw8IyN42aSSTmDRuWrlz4B72rFabNF71ADx626SRVGDVlNVeuC2SX5OyarOuk6Kw KezKbZbUvnuZZDee7z7Uvuuj5PUtAYBpbzsmkljShz++5G1TKwixrey2+/CMjm+duEle/dM2deo+ 3OgzH+DG3jZMhOWsbjrAdSF/dRjyF3UaXCT5q/p3L8bjotMY4JI5FrQa3TsdvRYA/NjbjomwpStv v/n20ti06m5pZxpH8juD+/Cjz3TQiRzTa++tj72E2h577R24KUl3jgztvK3KwoMjse+/3j92WwkX WmFKV8KpJnjnIcDNBdKhsDnx21MBuEJtpput3DHDWVmzZHvRVtH6hNAvqvGqQ7S9OwLH84RdkTJt nj3U4u1mnZZdwVJjs6puWsRxb0NYFyc0FhwkztvA1MVxhcMQJM5bftbFMRuYEiTOW73VxVHlHnGD iPMWLnVxRGINW6z3BloTx0ObPWDWefN4XZwR2MCs825M1sVp4Z4Ahog7SpZdcUq41gkiDsIKLjmW IFYICCvc47kMtFgBYQXnzDVqEHEQVnDGMIctFsIKTikOQXEnIKzghGIBijsBYQVzz4nBXAFhBXNP VcFcAWEFszchBlsshBXMdhkGtlgIK5gIsQAtVkJYwbgtuARIHIQVjBkMWquEkILR7SFimzgIKRjR mw3pNnEQUtBQYxDFJIQT1ChsYGuFcILawkowkDgIJ1wVHBqQOAgnqJSuOIWIg3CCClv6QaQpCCUo F+5ZTYg4CCUoE5iD1qognKCUY1ByUhBKUMKxAEWdglDCtrBVgwYQB+GEYbbPgYmDcEIzDDQOQglF MSxIIISwHR2HuQHUT9jm5yj3/wPBSAZtDQplbmRzdHJlYW0NCmVuZG9iag0KODcgMCBvYmoNCjw8 L1R5cGUvUGFnZS9QYXJlbnQgMiAwIFIvUmVzb3VyY2VzPDwvRm9udDw8L0Y1IDU3IDAgUi9GNCAx NSAwIFI+Pi9Qcm9jU2V0Wy9QREYvVGV4dC9JbWFnZUIvSW1hZ2VDL0ltYWdlSV0gPj4vTWVkaWFC b3hbIDAgMCA2MTIgNzkyXSAvQ29udGVudHMgODggMCBSL0dyb3VwPDwvVHlwZS9Hcm91cC9TL1Ry YW5zcGFyZW5jeS9DUy9EZXZpY2VSR0I+Pi9UYWJzL1MvU3RydWN0UGFyZW50cyAzND4+DQplbmRv YmoNCjg4IDAgb2JqDQo8PC9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDMyMDc+Pg0Kc3RyZWFt DQp4nMVc+2/bOBL+PUD+B/6yQLuXUnyLCtYFrtkHct1id9McesC2ODi2krrxay25j0X/+BtS8ouW aNaSfGkRK3pwRt83Q86MSaLod/TDD9Grq+sfEXn+HL348QpR9HB+9oygZ5RjjmQisZBIC4YW6fnZ /ffnZ+inV1cIbT1Jo1/70wf0JJ0++/frp2UzL27Pz6KfJaIM3d6fn1FE4B9FXKFYKEwEup2cnxEs jDTz8cv52Z9P3ixGefr0Hbr91/nZT9BA0YhwG4kZJkzsNGSbKRpBWw1YXf86PzNimcRMI8kTTAWC X8UbvfkeTeG+rddhVa8T0MZKWW2V2lJXECzNs7H5MNqiZ8S+wO2gfGUkHreU/uP8rIm4WONEi215 u5j8Yf/vscjLN7VQcmlajbkAZghOFFJr+leXEizKa/bX+rJaXSvV3rkqY2nObRomhKGKy+vGzfXd 9gvxjGBeSqbCnHGb2Nxgmti9Z8sqpCKYqZWuVApMa8xC1JuFv5FDdkGAp40Rb9sGixMpRSQSTSOm heQSTe6W9xkaTdEyS9HbJ4PlYpFO82jQH7xPo3yW98dvn1aZ0pEqMnB/Ql0d9+xpz5hkR2CpJMZE VaOVMCojnsRJRDlNRMQUo2BFBjE0GC+zPF1kqE1sqMaMelXy8RRN+p9b5YrHFGu+p89hslRXZGmF FfOQRQVJLD//WPMzW+Zodo/m/cFjmqMsHcymw/7iC/p7Nk1rzL5VEAUhWPE91Q+DGHcFYqww1TX9 g7FwFZGoPCgtXjwCRPP+Q4qy0d8AD/qwnNzNNj5wUqsUQltAndc4DKjuClAlcSyqAdU0gZ5WcRZR ohmPFJgDQcljDYJtdiZK2c7Eo9yJehGwfMb3FDnMV9IVX1JCHFqJCInMP84MSVTVsXQi3ATHhLva HoaNkq5wEwLrmnGJMkqgr3gJ2OkEPigTXFL2Et19ydMM9cfj2aCfp0OUz9A0zT/NFo8nCDZ4Qi2E juIBGFZmIG1gCHGqrHFHa3zQ1F/LNMszdD9blMHZMJ2OALq3T+yf0coco52Brl3guLY+62gbAJwn 12kGHDO3V+ImlBZUSboPH7hvi6DQOMGx9mmy3VesSROPUfIYQW/SLkOMY01dZQII4l0RBNm9qukd CsvO7qtTjXnaf2y992SAD+QZrlYBAHWVlinCMasJXh27LZEqTKhVCxaQye9pEgBKV+mXTJjxqW8C Zdz/0jIq1q9dVQJQ6SrPkZphXjdK7KJyHf0GHjXKR3Z0vfsCGc50eD8ap236EmdY0T2tAgDqKoeR tmBXB9AAwo3MRBrzxSyfDWZjNFz0odtZQC44mqZZm9AQ2w27+gTVx6guASkrUFSsK1C71S+aYH/x a/PkbvWraFiaCJKWpS2ZYCLdBrbusLWt3Zu2ilucUcySlS6MafPW1dUt6gnfDzRziHzBsIiryf9k ip//NfZ/CSPMAzA+Hw175AJ9mN1lPfr26SVKF4seIpfIXKCESF1lDsdqyDSxruJRUV+iN+Cqr/oL 
yNHACC4pvSQaLImKNjURlJgQztXksNMyT/7QkDjoR0hN5oUqc99j5UmCqV+eNZRLNJr1RJJA669e XKC7T72EcK1fvoiyC7g0z3qMySS+QIvlNO9BHwGZxCRLB22qWqZ5rq4BPHlylIY8wdMyqeOpZaZg 7PVLHIz7OfgyxIwD476T0bTHLhDEiz2eSAGu3f/40BPm+QuU5cP0Y48qikXSKknQBUM06aoZQJIn H2pIEqWY1pT5UdssKXFI5D5JfEMSXZOkVfckOVoGkOTJiRqSRAiOa5LFjjzJI9F60jxdDCDrghEy 2xDWph5lkuEqEkCDJ/NqRoNIYCysGY478hWfyK8IUUxI/r73pxHO310gJJ0TMHrvnmDOiVY9J5Hm ix5X5wDKPHlhQ8ogKdO1sUI3lHlEfkXcJUS4J6R7Qp2AMkfnAMo8SWtDylSCpaez64Iyj8ivKHYJ 0dsnBJxItk9Ic2Ljh5ypbihzdA6gzJNGN6TMZHon7hg9Ir8iCMVX+GthGQKw4ESrA5ROMNdeRax8 pQv5SalQQlYnZHlCdmQhjmYBFuL5brOhhRTftp7UQjwirYUkicWfEk3etWoYwn6l7MoPgL+zWofg GtOTpmI+iXefIGR8+QLePVtF+UpxRopInxKilbgwEaY5Bkf+roj7EyJMV7zOz7TmCvN2qx1MGM5c 5Q9TxzurdggW49iToHVAnUfidn6GLpHoxQxL9p0JNHtGqDlkpAcP0O9MKLM6YqvDVvs5xUxtytU3 gK3Oah6CxqYEekq2PBIdtiCX6pkCM/ARS3NIhWGu1cJlUeP3KUUKLXSrpsBiaQrortwAU+issiKI giHmpKbgkViYwmRlCsx4ozEEsfJQuue1rfJTzNd0NQzgp7OiCk8UlicsI/vkDeZLtP65RMts0ZNY mYEv+5L1wKi1hONB/rmnCVHUDJQf7s33ETB2wmernapkZgh0tQ1gqrO6i5n6RWo9qQOmPPKuf0PD dJ6/zwqmKLiKHd/KT1F+6vKTqiKGgUNufQ4OnveU9bpWHUxQaQr/ruoBtHVWe+HQJ6vTlpZ9IrPl 3WSUF7SRkgqxZkeXZ4CwVt2pnD/pU2zXOkrb6MxMbKHHVSfATDqr93AlDUInNROPyMFsMh+neboy E1prJv8P3hzFA3jrrOjDpcDxaYs+PpGjLFumw8K97RTQ3qJHok89ymDIJCSyX99n72eLvLxgT7U6 chZz6VwlAzjqrOzChcD820bxzmoQnAsTAlaSd7OcQlLfz5cZspMt4LG3T/rjsZ1v0fYXSxpsSe+p cxga0VmObyxH1rpSBwGOR96bm+vbn5wZDf2Hh8Xd9qQGCDvXf7c770ViQb36WfH9z646eW89rcJe 3/q7VfwYKLgPYNBELUF3JmpJkaznU+1M1JLQgXC5NVNLAhF7c602T1dN1uJUYR4XjZucPNlbiLh1 h2nAuWlrshYtVlysprRBj6LqJmsJTzJ9oJlDHkJ4bfJkVyPGkUhi3XQ14rFKlumtq2VAn+JJb5sh xhJem8QErUg8bpXKsUqXiwRdrQMQ9KSdDRHUrDZmL5YJKiW7WiZ4vOp2naCrewCOnjywIY4xq422 T7pS8OjXKL7Jcd8jAFNP0tQQU0VrI+FvXCx4Kgcv1u+5igeA6MlgGoIo6U503PkSvqOxKyZ3uvoG YOfJLBpiBx+6ZngpVvGxYhUfqVjF1yY2NDHbSnj1abZo8GjSilWDrmIBpHlSsIakcXLSZYNHQ1es G3TVDdjewZOiNYSOkbrlepImiioSVy4cPGYt39GwFQUIR9UA1DxfhzZEjSSdrOY7Ok4vlvO5agVA 1FlmQyHJbms939F9aDHX1lUlAJbO0heq9Tev6GsVEimNfJ8atQsIj6chqZIZQENnORCNdSdLCI92 4GJ+gatWAESdpTdUxbgm8DhuDeHR2BSLCB19gkpTUu2Upjgl6+rRTmmKU1uf85emtp6uKk3BUInj 1Q5Zth92nt/cUOyQtXPP9hZZAD9bw0SEmXRXXZeSvgWk3lYOcS/i2r0srsYjGMjQ9fR+Vlm/PlIw hLtUiz3JAR7g27imEQg8xqJmBLuZD9DVbDnNs1Yx4JgbDBzBARj4NoNphAGL68LD6gL+kdKK+r1H 2i9p3s/zBULo9foI/TqbPS7nCN2k/eF4NH0s5pmYv+xBsdUf/Fwt0n5xdJNOZh+rF38fCVPxBaaj ecAeYL5taBoxRlVtaFpOxGnx9WNReK1HqIxLsRRaXv1sjoj9rQkB3c1N0DnKzVW6ua8D0hytA1jz bXzTiDWiaqPldhmTwo5QHnk36bQ/SQvUfy286vWXydq/Xj0OR4vC0SbFkXE4e3QDv3+HrNBc/Odg kGbVewk0I83RPIA036Y7TUhLTChdE0SuzLZNXyv2GfVJJY5bHXdUOXO2GWuu0gGs+XbiacKalhBi e1hr09fs9FWfwFeP05kdrH5u1VeomS+svZJhEC3kjiCQQ+j3fv7+aja9N+euZpPJKG9RnXIWs6tO gBH4dhtqYgQwbuma9Lc7z/UIDXbS9llxtAogxbfbURNSislQtfF22xmH5rZPcqQGvL9vX6Mm7292 1fGNbM1ESLMJjiNiG+Lb0SQd/rY0vcL19GN/PBqi/8DQPod0L7ODfL6wRzdlcaR9Szz0/pVkxDsZ vtmZqCrBN3v2bGoPRYK/n6CvH67K7wFAocv8ntiConNdmHdfbdC90/7hG1R51a/gRgVbQdjVYnXP Wkq5lXdlQ96bLMj/A3JzkY8NCmVuZHN0cmVhbQ0KZW5kb2JqDQo4OSAwIG9iag0KPDwvVHlwZS9Q YWdlL1BhcmVudCAyIDAgUi9SZXNvdXJjZXM8PC9Gb250PDwvRjQgMTUgMCBSPj4vUHJvY1NldFsv UERGL1RleHQvSW1hZ2VCL0ltYWdlQy9JbWFnZUldID4+L01lZGlhQm94WyAwIDAgNjEyIDc5Ml0g L0NvbnRlbnRzIDkwIDAgUi9Hcm91cDw8L1R5cGUvR3JvdXAvUy9UcmFuc3BhcmVuY3kvQ1MvRGV2 aWNlUkdCPj4vVGFicy9TL1N0cnVjdFBhcmVudHMgMzU+Pg0KZW5kb2JqDQo5MCAwIG9iag0KPDwv RmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCAzMzU3Pj4NCnN0cmVhbQ0KeJzFXF9v2zgSfw+Q78CX AuleSvG/yGC9wDbt7fXaYu+6PfShXRwcR0m9ceysJW+3i374G0qyLdMSTVV0LwFihaI5w98MhzMk hyj5F/r+++T15YtniPzwA3r67BJRdHt68oSgJ5RjjqSRWEikBUPL7PTk5rvTE/T89SX6/fSEK6S0 xqlGkhtMBVIUMybKeu++Q3Oo2WieJq/G81t0ls2f/OeXxzWtoFaevj09Sf4ukMZEoLc3pycUEfil SBAsFUoFx1Sht/enJ8SyTtBPQPkJwQRaeTt5f4YQevwrevvP05Pn0NK/T0+GUZUCmwCqTBCmNVr/ 
EP9TTAZZKrGm7RwCGrukSmE25cSOJifOcMo8iF2OJx8z9GJ+s7iICYcBNdainfxhOPjR4GAMM+2B 48eiWKJ/TIvcKsjraZ5nOXp1t3pwy55OF2+qsqhKpAlW9DCXkQcXA3NDAshu+/7OwSMmN4IarHk7 N4dVRxxNdSjF2jeS1mYlJhapxsYOo0O00zVxuXmCb69/RGy2mEhLhWlj6/3Zk6ikJC3t6iEENCEg JPtTP1HKpTGi7DtVtuwYStqOwCEllUdTUkKx9KBkzdarfVP2zDVvz6bL527Zj5NJ/g2GfUsPDgOq jgWoMhSTA37OsUb9QdoH3BvPU0xuuUyx4CFIRaWqwAQFUJVbe3gEbW0lv6eubRqb1kpafo1L234q oW3r5qqN179+UX6U78o/m9dq/a7uwM5bCS4p3TZLCEMtLzdN2/e7rZevlOY4rVtWCjMl9iowC2EL dwdeq/U7D/MN6iX/uwxsKm1I7HUioMqebHSbNfFoIlJAxAivGuwRMV1Emu0qWRvDlsZqBRYKmzWC 8Id1RYLEYyI9bTQ5qk2igCqVY9YcaO+W0yJrH+KBzStTxgwd7VOmhzRu9Wi38fdnd4Ma1Fi6DQaM e0r3B76SCq+Vcm/4qNLCbIYPAb+rOYKq140R1KywVv5t+61GoEGDbOpRlxEBrk7NCBU2yHVb2Vaw rezWaWisAJvJ5ZplKgSoeIfOesJifystWsuMxWAzRHdWDVIjpUgk5TRhWigt0f3V6iZH0zla5Rn6 cDZZLZfZvEgmNlpOikUxnn143KY94WzVQZfL12Ffh3qi496oUKvF7agYRmXCteIJ5STVCVMMGiyR QZPZKi+ypR+h5H7851CUeErLedbhMwAlTyDYGyWiYdrxoERTw0tg/rYBZrEq0OIGPYwnd1mB8myy mF+Pl5/RX4t51oHaUKwEIVjxPW4DsPLEI32xkibdiYh2xpnVIJWQpH6oNUrctU4b4ZQpTJNce0l/ OHsY32Yon/4FMKPfVvdXi2+nxEJoKxiXvwDBeOKa3oLRqQWpFR1NDVg+ykhCCac0UaBKBJm7bw0U WBvM+B6rAUClEYFK1Y4ztxNxJfaXMwsPVd8eH8Ft0ONyGIBPq0v7lfiAEy7b4aGMEhjTL2EWJVq/ TCjjOqXkJbr6XGQ5Gs9mi8m4yK5RsUDzrPi0WN5twBqIDON2mcjDW9TJm9u4j7vkAuTQ6vV/pRys V9cxK5V6Ct/+fZXlRY5uFsvao7nO5lPA/8NZ+W+y1txkZ/IajA7X5TB2GAzYBvEEKb3hERKntB0e ToWmaZruo+QM6A1c4i4xdwkM+cHY1IrqcBeAjWcvrzc24NKz1Kc6+U27A/yQje9iWDIGMID36zIS AEPEmEAyYZ3Ldhh29aLGo9KHgd5KKrDRe8QDuh7R8ZcQnnHTr+uz8efhfTflgo9DPaDvEd15STg2 HbOE0/cXyc9oOrDLXNnQxUd0Pi2m5bR49Rkihfn1zXSWDRxenNmNRJdoAM4RQwFhOJYdFpigCXgD uXUEHpaLYjFZzND1cgz2ZgmR03TevrfQAwBSmlmXhaAlGqbqbtcLH/CxXvjYXb2Ebm5WirqWXrZf 3l16qdoW4PByul5UMVjurW02alSrKjuVGssqDLhRG2wYfCjVsa7CPN7ygWZapJxyLDrmkk92TfC/ Vp0vYA65BdE+TK9H5Bz9trjKR/TD4wuULZcjRC6QfUEJkQye38FQeD1egiDAYFxIc2H5IlS0qUQf fimxbonLcMCg8HjP/fFSpQ/filf7FkkPEpLYRTcfiVIkF2i6GBljwE17/fQcXX0aSZKmhr58muTn 8O4hH3Gj0nO0XM2LkcVesfs8mwzkrg5fXPYCBOBxm/sLQDIsO6a+rr2xPiKAGc5PZDIbFzAcwLGa 2BFwP52PBIwJ8KpGnKZKyXM0/uN2xOwKKj1HeXGd/TFixDCsW5fA+0ig8sxd7gJO5Hg88/4SEKzT /ezcnuxBRYlDVA5IQG0lINWuBEwkCTjMBUjA4//3lwCnnZ5vvDHgIVKOgYdsOYGgAqaHfCuNgaRr 79qlHYCvJ7Dojy+jne51RA33UPmCEMWEFB9H7+0JIPHrOUKyUWCgAObDTYFSUMCaBfrX86G6bqQV gstmgCw8kU5/WUA3uzzviLLwUPkC5rMBrIVetB4+7aPn1Zq3j6xDUjZ4SAkUqGYBjSZth6MAaXti u/7SJqQz6IgobQ+VL4BuA1gOSOtmgR1oplGgrXDMdmxSSqLJwmEzQBae+K+3LLjt5tFHno/KF2TM GliqaQk9BFBVAaOGVQWmLOCCCF0XSFsgSFkjjixcNgNk4dn96S+L1NgVgWPLwkOllIUxFti0hH6o BRTKnrxwSQbgGjP85cpgduxowkfk6hP4Ti+fQg/ztS/LiTSs9meHRgz1QoqPA8UY0efWm7MRPAy3 R5X7LIlRXMH/aw/aMHtezgyO4+ElxI8uSwGCjxnHc6mx9hi3OIL3EGnGMOgCpveRxIw+sm4dGWkN Rg2eGRRTMHiP7PxPRgRT+5hK+0joo6GmLVV2Y9vlMkASMQN6LjQWHtMWRxIeIo4kAP848FJwJKyF c0gHJEHEjNY5T7HxROtx4PUQqeC9X8PLRnb9DrRYAMjSlApfwW0DmPVTqeFUDNbwasnKZS9ABDHD dc5SLI8drvuIOCKglYLzjVUZruuMarBee0wEAB0zbuc0xaRzNo+yOOsjMXlYbY6rA8yrHGZUmwkK OOef8xGzZ4/heVL8ORJMlTP8bzd2PR1mffgcquuSldbcYTBABDHDdU4UTjvNTRwReEi8+BldZw/F x7wSAbVWxBqW+lPUn7r+pKryeeCR2zr24YeREuXj4P0KWdoeh9sAecQMqJlRmHtsT5zAwUclX13d T4tKHqTGWGxg13UJSMKRQy2FmAIpkz5cXgMEEjOqZhriyaOvLfqoTBb3D7OsyNYCoRuBDHV4tE0B 8NL+P4nc4ShA5DGDd5tcLo6+kOKjMs3zVXZdjcHyBN1oOSLJp5EGX5ck5Z5u/nGxLOrismjodESF 9QhcrgKgjxnfM1Ue0+9DPmaUySSIpmOqerOaQ2A9LlY5KrfW4WsfzsazWbm7HmEnRQt7cMXlIACA mMEdEwLT426X+0i8e/Pi7XNnu3x8e7u8qnbMSbVjDu6XUzD+c69GMdrso5cVGv8P7Qa81Pv9CDrs IsnOYRehyeZAys5hFwEWcHv0Zn0cZe+wCmkeVnFPuzD4kHUqoGU25W4DjRq2AadS47SLFmWKbOMk UNdhF+kJxPyttCgjYzYb0ZdEJIxuTTQNJ1VfneGjZbOUJP/6LKVwZuosJZeZgBR3T1jWG3bKcMf4 rJKUTGoSyqkRMZOUwrmsk5QcNgMw8sRNvTEiZfZdN0japMdKUeqBVLUX4jIbAJUnpOkLFTUUi94p Sr0yiAaiVOW2+/iMrcJ1ipJLMkAwntCmt2A0xR1LrFWGEgPfihLBTeQMpR6iqTKUHE4DYPKEA71h 
Son1Bb5dglIPeKrlUpfDAHw8PntvfBTBrGPcbDOUjODSZigJezVMUIZS1Im1ziByeQ1AyhNe9EZK lMfx+6YQDet5vavho/3VOUo98K/O4blcBODviW564w8VeYePJ2jKqNI6Wo5SD2yqHCWXu4BreDz7 Tr2xYRp3rQENSFHq4fpWKUoOHwEgRAw4KNVYdJn6nhlKfUZoeYbSJR7Q9YhOPyXaVoyTodSn76VB cKkH9D2iM29SnJKwvpcZSqEJRD2Uv0ogchkJgCGio67T8raf4ASigZKvErV8RA9nKPVAuDpY45IL WrRRcmfRhjGzWVfZWbRhHOizxqKNNPtXIzW+3bZoY88uCl+G0raCP0FJlFeB1rxwinnXVUXKd+uB r5GWmzsA3XZRXs6mMFl0X3waTMiAobD3AO1QChgnvhsL+nVSKrt9375E/DBBl4vVvMgHdrJak3Io BfTSd+9Av14KZWv0WAcObn+dNNXZ/k9ZMbb3wiL0y+YJvVos7L2w6E02vp5N53fVpr39r3yorsOC n8tlNq6e3mT3iz/aEzuDoai2oxxew2yG2bEZMMWvh/XufXIGE33AYmy/22YweIpVbYzAovK918r2 tXGfFetRQdVv/cvQWw7sy10m1lU2REjjZqxelUqE/wdsbwXIDQplbmRzdHJlYW0NCmVuZG9iag0K OTEgMCBvYmoNCjw8L1R5cGUvUGFnZS9QYXJlbnQgMiAwIFIvUmVzb3VyY2VzPDwvRm9udDw8L0Y0 IDE1IDAgUj4+L1Byb2NTZXRbL1BERi9UZXh0L0ltYWdlQi9JbWFnZUMvSW1hZ2VJXSA+Pi9NZWRp YUJveFsgMCAwIDYxMiA3OTJdIC9Db250ZW50cyA5MiAwIFIvR3JvdXA8PC9UeXBlL0dyb3VwL1Mv VHJhbnNwYXJlbmN5L0NTL0RldmljZVJHQj4+L1RhYnMvUy9TdHJ1Y3RQYXJlbnRzIDM2Pj4NCmVu ZG9iag0KOTIgMCBvYmoNCjw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggMzQ0MT4+DQpzdHJl YW0NCnicxVxtj9s2Ev6+wP4HfgmwudvIfBe5qAs0L73LJYv2tjnkgKY4eG1t4q5ftpbcNEV+/A0p 2ZZliaIqKkmAWCFpzuh5ZsgZvhiNfkTffDO6fvbyOcLffouePn+GCHp/fvYEoyeERQwJLSIukOIU bZLzs7u/nZ+hF9fP0G/nZ0wiSUSkKBJMR4QjwnhElG339m9oBS1L3ZPR68nqPbpIVk/+89PjQpZX L0/fnJ+NvudIRZijN3fnZwRh+EsQx5GQKOYsIhK9WZ6fYaM6Rv8AyU9whCk0n/58geyfx7+gN/86 P3sBnf37/KyfYKkjpni7ZEJELhxRhtHuD98/7coIoeaDESVjsa8lh3YhledER4rVKw9gHYuyXJdp pIPRyGgUUzeNIWEQPNIeUm+S1WSZ5DS8nq/u4eOnT8tF/oTQ9f1svrFPN8v86SaZhFSTYR5J0q7n zEq/gX9/XGxTo9B302mSpkNYTp0u7ZbDBrMcSiOqWgeAsOYTq0ibIaBN9sHt+zyRIWisU72dRj4Y jYSYb7tpDDoC6Cj2EHt9v1rPjOzv0zSbZPnTfHW3RujHSfbh2Xp1Z8qerZfLeRZSPxrD90i9gu08 icF4wiQSX8nbWkR7O9YQNNXo1s6SHIolqUmEXbHJzcMUvQQrvgqJhWJRDDzVCm8HIx4MDIUj2epC PSWJiDVIKsP+Zr5MZj9szTDycvX7ZDGfof9CzPCwmCepjR6yjX0KaqEsjjhp1+0m+W2bpFnQmKHw Dj8GTixCDWYRMTbtv3C02SqVUSxw/BVHsVoN23nSg/FkJmnuQOzZZPohCT+QaWYzvFrx7XAQPBge HFJPBxzfZdkG/XOe2Qzgep6mMJa8vt8+VMueztc3NWVvK2UDhJ11b+CB6GDLB5KZ2OLrxDOtsndD wWEhobwswMMrRoU06WatYj9fPAkqShI7K7VhUCyOoGKZhGJGtKb5ggnFOFZDGGn9+7da6WCrI5Kq SLjGQePQr08d+nlQ6yASlFHtylT0eD7fvKgONt9Np+kXGGzqNPXgcbC1CklURL7SYNMi2jt5qnkq jU5DsFij+QmJtTzygjr7PSZM/7GAvm0wtl/Q3lXYD1tn/9lXy11d8QJHtQICJ3LoFmOKair3XZv6 495tlSQYMqe8hoAUSqsNsDYQ7rXDmHRpIXfVpVcoNygULalhFK1osmt0ELR7m7qu3K1OuapdsGhe FzVvRNsG6VMptQm3kVLqWGi6GzVqustNWsQs0nuLgH9o076HI6t1dlJWKh88RCwNHUavo7Rtspqt l+jtZp4liN/XOaC3IJhizQhfkuTnaOrU0YRiZhGg1teEMh7dZKp5rdtSS73XutxBQp31FVpIGBkL n+NmM6Tax77eOsNRk5IpcBiemN6PbpxHoskWHHmSu5caY4BIrbzGUrYIGmsh+IhrRUZUccEEWt5u 71I0X6FtmqB3F9PtZpOsstHUpE+jbJ1NFu8e11mOv1pUA6jkRC+PLSVHttQZFS6imNSjoimMMUzH ekQghOQjKil0aJFB08U2zZJNGaGeaOTRrUufUwpGy8kffWlgMTFzZlWwBw2OFKszDeBgNHbQQDDN bfLve+TX2wyt79DDZHqfZChNpuvVbLL5hP5cr5IGw+2LFcc4kuxEWw+sHIF+Z6woN4TVO7IxUTnC o+KhMFl+D0g8TN4nKJ3/CSigX7fL23WdEQ9jY5wri1tFcw/cHIF1Z9xgQIb2tbgpomHko1yBoXGm RxKYxkjfdwSqrydKi5JDz/7ermCiPpXhwYRjH68zExiiC1H/hnhk/ppEHdrLrgT0x4dDmMBONPTA x7F/1hUfrlkkGiYBQgkGp341opoz8WpEKJiqkK/Q7acM0uTJYrGeTrJkhrI1WiXZx/Xmfpjpm5k9 G3aiqwdSjj2szkiZqK5h3rCWBN/O9yjQ3XpTBDWzZDUHgN5d2P+OdrY1OppeesPDlHG0qoIe8Dji /87wxDSKGxyNk5gSqdQpShWXK+DqGd5AlAHBnkuhdxf8fqTvR+D1vcGnzGxOVKV5gO/YQOoMvqQR a/Di3DbTu/og+yGZ3IcYzCjAYECvKOIBQ8C8gwuIL5tc9NjwCjxyg+v36iTmJtevCvc4dRQwueAc YvqGSK3p1ReTT/3fXZtFmap0j3cPGNFzBnlwQxRTefeXox/AB+bZ3M5bt58gll/N7uaLpKf1M2q2 JqqKeMAQMFjnFD6aQh00hek6NTP1w2adrafrBZptJjAcbCC3ma/qV7g7AIDzUbCigtcCDWPFa+9W Puhu5eNonVPIuLROBPVCA9YnyyK0tCxSXdrh5tiw2K2qaLOQVvl+qUW+rnLUqLSwQqTdhSm0oTAD s7hhZYU5wtmWbmpIhrSwKV77aFbZ/mes+QpG+PfA7MN8NsaX6Nf1bTom7x5foWSzGSN8hUwFgWCO 
qSv0FjzherKBYAkRdYV1nSV00LNI8l2KXokYLIbwnpI4wTbuqUjy8DpHAN2ZEabhxRrcrn6zpIMI gc3CnkuEJf0KzddjDv4Qkeunl+j241hxCeC8ejpKL6HuIR1TQsQl2mxX2VhiQRRZpkntOlIXqvMM pqqeBwGOuLw7ATAHyQZLa9qv6kKBPYrpEjJdTDJwOAispsbHlvPVmF8iCKrGSmrF5CWa/P5+zGH8 kvCcZrPk9zETZg+it68xZlbyqsp5EOCI/LsTIHVEG+bfxh3DDlIkb5PiSYCK8EAEVHTzIMAR/Xcn QKhINQ5BoTzAIcR6wEOymUJKAdNPeiCjp+gitq7K9sDXkVZ0x5crw/LQBu6Q8hkhSGpx9mH8M4ij 8pdLhIQp6Iuv4mZPzSXaiiQ4BpEw3+50IIpBAT0UUAlK9fUmLUzoUNXG4+C/I5PqzjaDQLMhiwzI tkPKZ0gND8AqCkjzUoHGUCDKBQoK5KGAYRqMi4qaHlw4MrvuXNA4Eg1pbUAuHFI+o7gMrPE8dSjg xuqRPhTE2HChxaFAhPOLipoeXDjSy+5ckDjCw0/zDimfEUS4BbBMCQs9JGh5gRasKNC2gHBIWIsC YQuEMuQE4qKipgcXjv2l7lxgGcXDj1EOKZYLrQ2wWiuOf+k7E3FpTl5VRXrgGjK9plpGzDHeBImk XEJuP0Ls9OopvGG6C2UR0hLn4SznQtNLE2mZ7B1c4VEe2ipzOErpfWyrMSRm/dNrYbK7qrYejIRM r6kCf3OMOmEYcQgp5xboCmKgMSTW5JEJfuCJGwoEHpNI6kcmQsJjCqmAqTalJCLiUd8RBxJHw0NF Rw8eQmbZ5hIGHzrHcAkpeOgpRymId7lT0I5pAVxKEQkBZMaGTBaJOOcYjxmMfX2JNYxycqKLB7Eh s3cq7dmogYl1CMmJXe5gp+BV2DgQhwdCLODwJOOdy9E4dzloRnr7Vr6CVVXPg4KQ+TsVMN06ZvMw FDiEVCgwo5hFNx/FikdxKI0PpdYbglCh7B5SVU0PKkKm+pTziDQGAEFWc10ipg/b/XF1IGKbwkQf CYNz+ikdSxlRMxpNsz/GIpaYMBMW/Hpn1vghUoDPvhyI2HJQ0dDjRnfIBJwyc1V3WA4cIl7+gGbJ Q/ahdkusg6Tinp77bXq/T3Eq2iUFHNoMpcabi09efKrik8g8noRHZsdfePh2LLl9DBRGVhT0sKmQ CwnmfALjQydMLinp9nY5z3K/xgXGfA+7KkqAiQoPBQshCbE3V6q6ehAScjWBErNTOzQfzUKm6+XD IsmSHR3ka9NxrKoHGyHXEyimEW84sR+QDoeUeZpuk1nuHvbQ4HgzxqOPYfYPXILHhCqIYfDIbpWn H9abLBc9tkV9Z9T8eFhVAQ96Qy5rEE3hfTuJD5nDEwjtRMPkdLNdIfPbNdsU2RML8LV3F5PFwh5a CLCBlG9wVDXwACBk8kxiEtFG9woS0bhEvL15+eZF5YzA5P37za09JpAfEoD48ei/kz8qteVzA7Y+ 4EECQSHBO30Hr+NDIj46PsRxvD/jc3R+iMMAezjM1HB+qPTtugNERNq7arZzyu3Wb6WDUgvTQaVR +WYWzL6Ha6eEm58gqT8/JFwHRZ291BiiIFFDGh7yYpa3VsXFrIpaHt7pOjXaFRNuje8v38vqiQCx FlJVouEXgLr0qyJ62u/RlPvXLjh4q1CcBGt9tdNfnnIdjO3KLsONIXl+3YtjPdh1rw7K2uteVW09 sHIdpO2KFYSiDeHyl73t5a93fturorgHaq5zt11Rw7oxuMwve0lGRwQryrpd9urpfVSZQ2Eu7UL6 en7ZqyrNgwnXtbuOTJjdYdrg68Nc9vLHJ19ormrogY/rMlxHfIrND/dlL7AbTexlLyYIDXbZyx+p /LJXVVcPpFzX4joiFSvzOwRf7q6XPzr5Xa+Kfh7guG7CdQRHxlFDMsel4kQK0n7Rq2dww6kJbpoV 2VHRds2rw2jK8t+gizplkdJ1w64j7CI2v+IY/o6XP+r5Ha+KHh4gBExguIx0k2N2veHlLXp3w+tY tld2KvVRdkpEvE8gj7JTImkpWc6z05Mf/Sh9uy47NUPD7nd6ICCSJ/WxeeHdL5scCWhvIItat4YH Fez1mWMtdm32UoqfP6ntyNnIwvx/FTDBSg0KZW5kc3RyZWFtDQplbmRvYmoNCjkzIDAgb2JqDQo8 PC9UeXBlL1BhZ2UvUGFyZW50IDIgMCBSL1Jlc291cmNlczw8L0ZvbnQ8PC9GNCAxNSAwIFI+Pi9Q cm9jU2V0Wy9QREYvVGV4dC9JbWFnZUIvSW1hZ2VDL0ltYWdlSV0gPj4vTWVkaWFCb3hbIDAgMCA2 MTIgNzkyXSAvQ29udGVudHMgOTQgMCBSL0dyb3VwPDwvVHlwZS9Hcm91cC9TL1RyYW5zcGFyZW5j eS9DUy9EZXZpY2VSR0I+Pi9UYWJzL1MvU3RydWN0UGFyZW50cyAzNz4+DQplbmRvYmoNCjk0IDAg b2JqDQo8PC9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDM1MTc+Pg0Kc3RyZWFtDQp4nMVcbY/b NhL+vsD+B34JkLQbme8iF3WB5qVtLgna2+aQA5Lg4LW1ibu2tbXkpCny429IybYkSxQVSe0usNZS FGf0zAtnhqTR5Ff03XeTl4+fPUH4++/RoyePEUHvz88eYvSQsIAhoUXABVKcom10fnbzzfkZevry Mfrj/IxJFFIcEIoE0wHhiOpAKdvt9TdoAx0Lo5PJi9nmPbofbR7+57cHOSmfQR69Oj+b/MiRCjBH r27OzwjC8EsQx4GAhzkLiESv1udn2DCO0U9A+CEOMIXu8zf3MYz1xy5K0gTdxFuU3FzvbhK0iFaz z9HiwTv06l/nZ0+Bxr/Pz3qxQ0IdhKqenzf3UZmShbAIDx0LHkaDkPrD82zyC1pululylkYLdP0Z JdFmcbNcRQMiRYEnSepZa0eKjYUUpQFVTqTms9UqQWmM7rZxGs/jFVpsZ8sN2sa7dLmJkiExwixQ pJ6pE4zqYOI5MvYxJsz4oYCxAy2RPJjy/ob9sPfsn8Ntub+X81+6K0Jh2vbDYkxRzc3D0OZ+efSM OIEPld1hPND0ZIhjBzNEuU/BFwmwPBnuOSVSBKzJGYlmHWoZpU2JMMjLZW6PV8tok6Jnm5v4sk5d vpa8VgFRvJ5+u03JsfCQmhin2IzH1d0cPY53mzQZFg6YugCOWvLtcISjwaFIwLgDDoSGREFgMxW1 Ev0pSmdpukUI/Xa4Qi/i+HZ3h9BVNFuslptbZH/Mf/bi9XaZRubi8TaaZVdX0Tr+WDtPfDVoBNwl q3+Bdimq0aQYEsDOKUX7MyQUIbf23UpahDlxAsPvf45X2P5VGAtq+lCsMGjI/occ+40hxjre28Wo RxOjhA+XbxpYhILbubeN6lW0ma2jTAwvMsP77fP6YIIvbxfLbWaL6+zK2KS9uoK/v652ibn5w3we 
JbUBSV8p1vHfLkWCRxOj8XJunzqCOapAG3Nso41HoE1IaHp6vPegVKmEV/Z/472v6XJFx9DXOoY9 9NWRr/bUV3gsbFHXQd0OkGun+vJ2E9uZ/cckgRggu1pClIrQr7P0w+N4c2Nn+3i9XqZDskchyIdE p44/Dyk50uaeUoJMjbrKCmN6lTba3pY1hqDqmPOQlCNt7ykpogLlColMijF4vqUYJBa8nrgHGnw0 NLAy/VsY6kkJOjZQKuL+armOFr/sjC95tvk4Wy0X6L8QrNxBBpzYsCXd2qurvP40hrJ6oXEqntHq AwLydOx2K8MngK1ECdVSatngSwaNIrTJY3xgGNq3UsoCTNopj+s9a8l7KORoBRqhwkA6C1az+Ydo hHpVVqCpJe+Bx2gVGhGGRkGb8fjB1Ed+XqY2y3q5TBJwYS9MpaTS9mgZX9W0va60jRDx1r6CB6aj 1UuElEH4D2VorbT39RJ0qJwU6yB8cDfEQ+uG6th6c//hoKQEsS6nDYF9NehQFyKYYXDU2iBBicKC jKGl9Qi0aulo5SAhZMAcMBmDfnFq0E+qRv5kuX1abfthPk/Qz0OiyEQYcNbK9Oi+poa+x2LnaMUg AbGpcgEypqtpo+2dttVcHUu4YxRJaln3WmakJJdevpiHtYlvslVG++LFpUTCC0uhtUuJx8ftamJ5 hHwhUgWSZU+TEAeFfQn7DqGB4rCYicHddugh97cLbBY75JwW2DCcVjg5dDoQ2i9+1g7l7HUKeW3R o9lTmDfCbeI9pVKbsBsqxYEpDpi2A9cMl6smJkGo93DqhhVZ6sqIG0cospM7AMwCLSxHpRrAbLOI 1/mqFaHqttaIvOgQSm0KcyTkZyjidEFeEGwI1q7JC2JNcq+kQmtdUtLs9lFJSx32mnUcv3ZxvkAj 17wyGcsI1zBRi9zguGmpjFLoYE2h1KegCIzLgBxYJpybyL9BGxz5TsswpyrBFUiKHLW/qBc01ELw CaeaTKiCjESgtd0ftNygXRKht/fnu+022qSTucmDJmmczlZvH9RpTwe+qBYmCKwy5jFxOvKe7riE LOBhPS6aEjGhItQTQkLNJ1RSGNFig+arXZJGWzdGk/Xsz944sZAY/awy6oGTI5fpjpNkAWYOnEKp a+OJDpTycqKLlMH+2wP28S5F8Q26m81voxQl0TzeLGbbz+iveBM1CKa/OLR1F1UmPcThCNq7i0PQ QOgGczZqKid4kl/kastvAYu72fsIJcu/AAf0+259Hf+Nisy5jRaqrHtsdnNEyt2R4zSgDQaviPGA AvMJwVKLieQYY6Rv/3aomJIBZSe8ekDlWLHrDhUjpYipFMZPzC+jBiAiTxDq+f5EaRO7uxgYXwbc WniVBw8ZONbjusuAkn1weQIBoaCmlDyHSTuU8vmEUKaZUs/R9ecU8u3ZahXP7Q7WNEabKP0Ub29H msuZJharCrMeWDlWxLpjRfA+Gq3X1/Iu33W+B3qzBIje2qklmew1eFKaafoDBJmSMegKhx4AOVKC 7gBh+GgKAgnlREEYeIJTxbQPgPHbib6dgPH3R4fafb5V9jzQcaxRdUaHaUDDqT75tvmy6+nr6rIy hYs4GOtdNLsdxKvlK0BVch5QD5mNsFCbze/1UDccVTBa1xfrkJvtO1XqHi8/ZMrBpA5og0vvfE6j 09vbsxlV8h5vP2QiwYQKVJOV9TiG0cUGsqMXVU48gBgyhGcQDvNGG/iaUxZdIMhOVlR58DtUgUvV Tg6uZF8aKR2a4JApHStJDdWZwtPl6kw2OOMkoGRfd9GBOCm8FHpkhZdSp0LlRctAycMZDgoTMuMN lRfuiKJbhqmRM4UuDQnHJ1OK+59R6UuYTt6DcO+Wiym+QL/H18mUvH1wiaLtdorwJTI3CMaChJfo NZjDy9kWUVP9vRTqkocgUcLrlKILvxCdQJBSZbjdLrgj4u2OFwkD3RD01y9ZdCCR7QVxkbAiuUTL eAraCkHby0cX6PrTlCsW0vD5o0lyAffukikLQ32BtrtNOkWUwCy+TqJ5T+7ylKPKnocAHGF0dwHg MOAN81PTqlEXEditmC4i89UsBXOAGGtuLGC93Ew5v0AQ/kxJSAgXF2j28f2UUnB3YCtJuog+TsGj yUCGfSWQxelV7jwk4IjTO0uA6rAxGGxcuOtARfI2Kv4SEHwUCVSZ85CAIxfoLgElGwPUwWzARcTa wF20nUOqDtNDcpRGT9J5DFyl7YGvIwHojm8oG2PgATXcQeULQiTAOP0wfWNWlvW7C4TEsUH0ZIAS ZcISFwcCSMKMeyAZUmigxQb+7qKvNWXV6iobHtJ2ZDzdpQ05blPMP6C0HVS+gEsvAGug58WGEBpE sUFBgyw26MFkUWHTQxaO/Ku7LIRoTDsGlIWDyhcUFoBVBJBWxQYjHF1o0NI0HG2TEDWYLCpsesjC kQJ2lwWkaLohNRhQFg4qX5DWe2ApzpCGBCpvIJpmDdo2MI65yhuEaeibcDAZmnjXxSCE2XIwaVfo tEtbOFaeukubgThGF3YzEStrrY3gQsWYfNc3kjDlBlWl6AHqkNk1pby0l3ycQM1B5PoThGbPH8Eb JvtQmXFBCc3CZY1DjNWFCeVM+g62di+LnbniQtKAiH34rEPoGfC+8TOHPsamKix7iGXIJJ4SVtrS Po5YHESKCQy6BEOekkCyeybiwlMzuyi4ptDMIYe8ZyZ/PDXfUQKXoTCXmN3r63Ug6JLshEsPSQyZ zZtKX2M1ZShBNNOoyAHQt+DSvuCavYbG+ZQpe2A7ZJ4O8yM8MTK4LiIZuus9unRKA2m0mU9NUdRq u0Gb9EU737DvYsWkLhktY0zGlvobUFYOq1L1EPKQpQCiqNnVOrKQHUQqQiaZBfHcfw0i3ixZrTLh AfSQNQESElOZGLPw6yIxv9sdtpcDzLsEJmyIF82ckXxOptTsSIHrefrnlHPOzPT++42p1cOUfzOt PR3SRdcFNZNFlUEPEQyZqBNJzOrJqCJwkHj2C1pEd+mHJBMBMV7EqHn+yfNPlX8SmYVUcMlMH3Px /VRye9l7LURY31Ph1kMeQybrRJBg9FzdQSTZXa+XaSYNnCPMD6CrvAXkUJFCLoMhxWGPaFRY9ZDG kOk6gQ/pmO4HEoeDyjxe360i801TmTzIPy2PCq8eX7I2ZEZNGC59J9ZIAnFQWSbJLlpkBmK30k23 Uzz5NFUQ5+KJXcpNPsTbNG+2TX1nCpiVzGRd4coD+iHzbgIK0KmgLIdMLwnWAW+wkavdBplvb9kl yC6ow2Nv789WK7umPsD6ieLmq5+qHHgAMGRWZ6p2dNyJ2kXi9dWzV31tS9uz7y4qTyvL8LP377fX 02HWv1x07Vq/7BtTZztsXHSy3QQQPl5n2wtk3jD7s9Kw3KTTwx4D26Hwf19hQx91Km2vnUCSl3YC MfjYb9Yp7QRioLRUlA8TnpwTLDxdtxNIk0DR/MAk5Wx/tLMwQKGHGaDSqbATiPPCl7lyYg+d1G8E 
ko5E0j3KqcmaA7INLssewAonglA1oYpL9fUHsPy5ys9fVdjy8GOOnK8rJlIFoiHrtYeKmAKvCTNd qIY8fOXPZX72qsKmB0aOpKwrRkKZL/5txoiEmqHac1H93j0/g+Eg3//cVRcssMmPK9x4SMKRjnWV BA8D2eDK/9YzVx34zo9clRn38+265NuVOLrfkm/XuDjVZLs8T87YFp6u3eWZH//l0nar3FSlr+Uu zhztHWR+t2XqOR4PrrCw73AgcTycfjqKs5MF+P8rKbK3DQplbmRzdHJlYW0NCmVuZG9iag0KOTUg MCBvYmoNCjw8L1R5cGUvUGFnZS9QYXJlbnQgMiAwIFIvUmVzb3VyY2VzPDwvRm9udDw8L0Y0IDE1 IDAgUi9GMiA3IDAgUj4+L1Byb2NTZXRbL1BERi9UZXh0L0ltYWdlQi9JbWFnZUMvSW1hZ2VJXSA+ Pi9NZWRpYUJveFsgMCAwIDYxMiA3OTJdIC9Db250ZW50cyA5NiAwIFIvR3JvdXA8PC9UeXBlL0dy b3VwL1MvVHJhbnNwYXJlbmN5L0NTL0RldmljZVJHQj4+L1RhYnMvUy9TdHJ1Y3RQYXJlbnRzIDM4 Pj4NCmVuZG9iag0KOTYgMCBvYmoNCjw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggMTgxMT4+ DQpzdHJlYW0NCnicxVptb9s2EP5uwP+BH9MOoXh8EzkUBVqn7bImaJdm6IB2HxxbaTXbUmrJ3fLv R0qW4xdGJGppS4CYNpm703P3HO9Io+g9evYsuhydnyHy/Dl6eTZCgL4MB6cEnQLDDAktMBdIcYqW yXBw+3Q4QK8uR+jbcMAkkjzGwJFg2r4AoVjJat3HpygzK7fEQ3Qxzr6gkyQ7/f3Dk7WuICkvr4eD 6DVHChOOrm+HA0DE/ALiBAuJYs4wSHS9GA6INZ2gN0bzKcGEmuWTTycKNEQUKImAMIBIckII0jP0 12pxk6PJfFWUybJAaYZWRYI+n0xWy2WSldFkPPmaRGVejufRYvzP5ydP/kTXvw4Hr4xBvw0HxxnP lMSUua3/dIJ2VVWQb6NJe0OTURzTFjRJZH8ZtRiCDASxS+CoJpiC39DeHGc8Rphbv99xrDfHUYqp asEDKBBJ6duIKqLU2wgoUzGQt+jmvkwKNJ7P88m4TKaozFGWlH/ny5mbCx3DqaGC02W+H07eG5wA WHl5YOR9WyVFWaDbfIkWN6vbAk2TLDUofj6p3kYNMyL79qfmXccYMlXlEpfNfgxFbxgSwKIFQgZc QRzHh0juJZUNpHwW6Vlk0k7H+FGGFTjt9cMn+4JPWma0bWx1CBa3VdgdJN67ZDzrPvtRAxUBt21+ rOLesFIEy1a6oi5REMRu2l6lOzG99lMdy10aAzHHWrmt8btE9eaS2IDUtiHtcX6Dz3x83zVAGsfK bZAfIN0bQMKYxcMBOo/eGZanZVrt0jf3qEiy6W0677a6MmWNBLdtfqyA9AYW16YbacNqYiqYwhYv d8u8zCf5HE2XY5MVl/mqTLOk6BQlUm0YLqMOQHLiBGtoqv9jwsqPhZGNtTFq02k1E9VLNVf92UzL Zm79ADuzIhb2s0YsIRQ5Jjei7fyu9GpKcoqhngASY70v4WHeCthdstUocsO9GDZAS4GZeqxRbOlt PGJ8MUQ1pm0b6miemp0TnWe3+c+ucPlR9VqZxMPd+gNI1dIzHAkIqPYi9+pugkb5KiuLbvFgZiF3 qw/Ao6XoPxIPouz6x/FAziriR5Wuqwif0jdJOS7LJULow2aELvJ8trpD6CoZT+dpNkPVj31XDT4u 0zKxg9EyGdejq2SRf3fuFT8MGmismPsBAtzY0ncc50Zh+NZaONdgdQmF1FVEezUDsFo5ol2qB9M9 m9LGq95kn+aHb0Zk/QpE2xcGSlPYzD6MSA+x47Q4IHZamq4jY0fF7TV9x4EjuN1avVqvkmy8SGo/ XNR0/3C/2BD/cjZNl3UGWNQjmwmq0ZX5+9400nbyxWSSFM5C6Fg3uuwPcGNLP3ikG+O4vflogrpL MGJl+jDu100eKHXECPrwo8v0AD+2NJFH+lHK9h6pax/avidA7eUsy6ud9nVRmD25HqWmbkTo/bjT zK5tUPkNKr+O8uzW2jHKF4u07NIGakp90+04bQgIjpYG+sjgELK1J+yT4x7VwXTuw08O2wJulFpa 9yPdxIXNLe1dRuc9l2I4to5yKQ9Ao+W68kg0TF8twGfQkZrMwkc0beN+nS6S6buVzV7n2ffxPJ2i P0zhcGe64KIqIcplNbpan0L1EatBaBy6p7czAkEFJvF/Xv/5tDJKuP5fk4rLwgBH9XZ2IYBj2ZZV RvbGsoeznPrswqk+AI/ezi4E4Rh0Cx4v7MnBL2lZdQKXaVEYal/YM4S9z16m+ZXjs4+dhj0T1MLn NXrXjj4qXpcFAX7s7fCCmwCL225Ne6xqvLrjpvfYHGNsHxDw7g2jppaR4Dbs08lpp6okYP6Iqp0j nPqYBK0PTChhoDXURyeUENVHX/bI83ujtLdjEm7KK9ZWW9gscnGYRc72M8tZuny1/9mLyaRAvTPf +QgBmPZ2ZsFjipWn9OiN+T7dwbWHYyQeckUffnSZHnQLR9Xae+urLiabq66dOzbJYeuS0MwLrQ/v wTb/vHvRVovm5qFkvL5GM4+79W3KZgG3GDR3fDsaAlbIZrrdyC0zqtu6XUs2izaK1neCblGtqw6x dp4C1L41lsBBgWAeydTGnt7g8Ot1zi7WqtkWbEDS9Z7vE+dsA/fFmdKYiSBxzrZlXxyQpmP1iXMW 13vimJaYhj2sszbdF6cAx3GQOGeJtC9OxpiGYefcy/bFierbCyHinGl8XxwzSScMO+cJ6L44yrAI w+5RtmyLIxqTIOx4CCuo5rb+CREXwgoam+wchB0PYQW120IQdjyEFVQYkoVhF8IKypojN5+0EFJQ CljKIHEhpKCk+pJLiLgQUoCmzV2PT1wIKcBUIGFRF8IJkIZiKkScCOEEcFMeBUEnQjgBjNsNN0Rc CCfAFEBBDBMhlAAimmt8n7gQSmhT1PEw6EI4EZtdLEhYCCNMpyfDnjSEEPY7ZI+y9V9Hm3T5DQpl bmRzdHJlYW0NCmVuZG9iag0KOTcgMCBvYmoNCjw8L1R5cGUvUGFnZS9QYXJlbnQgMiAwIFIvUmVz b3VyY2VzPDwvRm9udDw8L0YxIDUgMCBSL0Y2IDk5IDAgUi9GMiA3IDAgUj4+L1hPYmplY3Q8PC9J bWFnZTEwMSAxMDEgMCBSPj4vUHJvY1NldFsvUERGL1RleHQvSW1hZ2VCL0ltYWdlQy9JbWFnZUld ID4+L01lZGlhQm94WyAwIDAgNjEyIDc5Ml0gL0NvbnRlbnRzIDk4IDAgUi9Hcm91cDw8L1R5cGUv R3JvdXAvUy9UcmFuc3BhcmVuY3kvQ1MvRGV2aWNlUkdCPj4vVGFicy9TL1N0cnVjdFBhcmVudHMg 
Mzk+Pg0KZW5kb2JqDQo5OCAwIG9iag0KPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCA1Mjk+ Pg0Kc3RyZWFtDQp4nI2VTW/aQBCG75b8H+a4juTxzn6vFOUQCFWqpkoVqh7SHhAYuGBSlP7/jgmt sMO2iw9eo53nnXnXM4bmEa6vm4fJ/RTkzQ3cTidAsCmLWkJNGjXYaNFYCEbBoS2L9VVZwN3DBOAs kppPi24Dou3qr0/VCXM7L4tmRkAKo4P5uiwIJF8E2oE3Bvk235WF7OUkfCiLZxERqh8w/1gWdxzd E/7EWIVanYc9i8fKStEe1vuqtuKwW1S1Ed2yhapWYr/mmxYvVR3Fol+9LrftCvotx8du1S+Pz7+6 hKRiMROHmnVyb0Q12vtyEq5IijdxKz7PnhIErQyGEeG7GG1uZo79HJqpIrpRnOulTMUFvm6r2olD 2/9xNGiVkg+Ezg8xqbPQwSLZ4d439yvHAmw/hESoIYPW5nlqKGAcVTaoxL8ttqzank72fYGXX0Hj LTo3crtKJRIkapP25l0/qHQ/KCBCaYbJBN8nY5xEp3P4OsUfEYkT9yaHaP5H5MLJBbQ5MJsDsxpj yKG5HJqOaLKOxufQuBVjlm0hh0YSeXhm0GIGjaexD1m5kczAcderPFpyxp/RnEOfVSklO+SMxiM/ D6ZP8T/5O4Ly+CMKYGxAz0nxwFQ+IrvWf8C+XUFXFsbKfoT1WsoH0J7f7GP/GViyYnO/W2xakgTT PXy5pGkua5IPGP5K8tD5t+SpQZOSvwGSrJJiDQplbmRzdHJlYW0NCmVuZG9iag0KOTkgMCBvYmoN Cjw8L1R5cGUvRm9udC9TdWJ0eXBlL1RydWVUeXBlL05hbWUvRjYvQmFzZUZvbnQvQUJDREVFK1Ry ZWJ1Y2hldCMyME1TL0VuY29kaW5nL1dpbkFuc2lFbmNvZGluZy9Gb250RGVzY3JpcHRvciAxMDAg MCBSL0ZpcnN0Q2hhciAzMi9MYXN0Q2hhciAxMTYvV2lkdGhzIDMxMjEgMCBSPj4NCmVuZG9iag0K MTAwIDAgb2JqDQo8PC9UeXBlL0ZvbnREZXNjcmlwdG9yL0ZvbnROYW1lL0FCQ0RFRStUcmVidWNo ZXQjMjBNUy9GbGFncyAzMi9JdGFsaWNBbmdsZSAwL0FzY2VudCA5MzkvRGVzY2VudCAtMjA1L0Nh cEhlaWdodCA3MzcvQXZnV2lkdGggNDU0L01heFdpZHRoIDEzMDAvRm9udFdlaWdodCA0MDAvWEhl aWdodCAyNTAvU3RlbVYgNDUvRm9udEJCb3hbIC0yMTggLTIwNSAxMDgyIDczN10gL0ZvbnRGaWxl MiAzMTIyIDAgUj4+DQplbmRvYmoNCjEwMSAwIG9iag0KPDwvVHlwZS9YT2JqZWN0L1N1YnR5cGUv SW1hZ2UvV2lkdGggNjAwL0hlaWdodCAzNzEvQ29sb3JTcGFjZS9EZXZpY2VSR0IvQml0c1BlckNv bXBvbmVudCA4L0ludGVycG9sYXRlIGZhbHNlL0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggOTIx NT4+DQpzdHJlYW0NCnic7d0LcFTl/f9xepu2iq3aWrXOWCdVcXDUAoqC93qBCCuXYCLl5khQN2ga 8U4BL2nGSRnRSoKYykWLSMeMCBREREUdLHct9a7czUZJTALEIMGwv+/k/P9n1uxmE9mc3c+evF/D MM8+m9397Nlzvs8+u+fsCYcBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AACA1OjSbNq0aTH7I9vR3D9eu3bt0KFDu3XrdsQRR9j/N9xww86dO+M8nOtHP/rR0Ucf3bt374UL F3r01OJn+PGPf9y1a9cePXqsXLny8B7lnXfeufzyy4899tjjjz9+8ODBiUUGACSbMxwceeSR27dv j+6PbLc2DtoQ9tOf/rTFVb/4xS+2bdvW2sNF+8EPfjBnzhwvnlr7Mzz33HOH8Shnnnmmew9ZWVkJ pwYAJJVbw/v37x/dH92O1rdvX7v2xhtvXLVqVU1NzbJly8455xzryc7Obu3h3ItNTU27du2yv7TO iy++uIOeU+zHitnf2Nj4ySefjBgxwnoGDhx4GI9iM0q77fvvv79///69e/cmlBgAkHTOoDBmzBj7 /9lnn23RH92Oduyxx9q1r7/+utuzcuVK6/ntb3/b2sO16Pz888+t8/jjj3d73nzzzd69e9uk8oIL LrDhNfKPZ82aNWjQoKOPPrp79+4Wu7a21r3qjTfesL8/6qij7LbWbs846LCx2HqOOeaYNgM4t33h hRdOPvnkQCDQYkbp/M2OHTtuueWWjIyME088MScnJxQKtXZzt8eWnj2cPSl7anV1df/5z3/s3cWv fvWroUOHRj7B1p67cycffPCB3edvfvMbW/K5ubl79uxxb7h69eoLL7ywa9eutpAHDBgQOfePs6gB oDNwSqjN40444YTjjjuuuro6sj+6Hc1KfZfmb/qysrIWLVp08ODBNh/OadtksKGhwaZjo0aNsk4b O5z+nTt3/uQnP3EHF5twbdy40blq6dKlLYYe9ys5+xvLEHmr9oyDBw4csEHhhhtusJ7LLruszQBO j/NA/fr1ix4HbWyyQSqy0xZsVVVVzJu7PT/84Q/dv7d5sY1K7sVhw4a1+dydiz/72c8irx09erRz rY3ykU/HTJgwoc1nCgCdhFvAy8vLuzRPDFv0h1v5Qs29h6+++uq8885z+20+cvvtt7dzPxnX7373 u02bNjl/M3PmTOux/21M+cc//mHt66+/3rnqoosusotPPvnkvn37VqxYYe1TTjnFucoGBbtof7Bu 3bq1a9fa9KdFzvgZbAhYsmRJmwGcP7al9MUXX3z00UfhqDcJBQUFdtFmWKubOUtm/Pjx8W9u919Z WTlv3jznok0DKyoqpk6dau2TTjqpzefu3MomdBs2bLB3MjbMdWkef51rn3jiCbs4ZMgQG/Fnz55t 7dNPP73NZwoAnURkGbcJnbVfeeWV8PcZB8PNM7vnn39+0KBB7g4zxx9//Pr161t7uBYD0Pz58+0e 3L+J/rzx97//vXttY2PjSy+9ZCOLdXZpnkk5/TaS2sXly5c7F1988cX2jINdu3a1gebqq6+2kaU9 AZyL7733XswFaHr27GkX7dEjY3Tr1i3+zW1SbG2blTsXnSHSRsbIJxjnuTu3+u9//+tctPGuS/Os 07k4cOBAu7hmzZroRRF/UQNAZxBZxm2Scswxx2RkZDQ0NET2tzagRKuvr7cB0WZDXVrZ7cS5Kxv1 du/ebdMQZ9y0iUzkl1mWoUVxPvLII52rbGz99a9/3eJa5yrns0T3KzObFsUfB+M8izgBnIs2I2vt 
3o466ii76H68XFVVZRePOOKI+De3BW7tQ4cOORe//vrr6DuP89ydtvvcbbiMvNb5AjdyCbfnmQJA J9GijM+dO9cu3nXXXdFltrV76NOnj11r8xS3Z8eOHdZj5bfNh7NB0/lqzP081jgfaX7wwQfRN3dG 2NGjRy9evHjbtm2R93bttdd2idhdZ8mSJYc9DsYJ4Nw28jvQFvdmY3qXiPngwoUL7WL37t3befM4 F+M89+hnFNmTmZlp7TfffPN7PVMA6CSiS2i/fv3cHU5a+5tIzheLp59++qJFi2pqatauXevsOXP1 1Ve35+Eefvhhp9M9lH7SpEldmr+o2rlz51NPPRV5VyeccIJdtDHOppMPPfSQc0ObhNpVZWVl1j73 3HNfffXVdevWOUdzHN44GCdA/BHH3HrrrV2avx98u5kzeBUUFLTz5nEuxnnu8e+2pKSkS/Oxjbt2 7Zo1a5a1L7300jafKQB0EtEldPv27V27dm3/ONjU1JSRkdHlu4444gj3q7r4D2c3t7LcpfkrRefj xI0bN0YemG8TRvebr7vvvjvyUY477rgu///rtlAoFLnro3sP7cnQQpwAbQ5kX3zxxWmnnRYZ8qST TrK3B+28eZyLcZ57/Lu1F9Tde9bhzprjPFMA6CRiDgrODKJL+8bBcPOe+WPHjj311FN//vOfd+vW zaYere1+H/OuduzY8ctf/rJLxKH3q1atssHxmGOOGTBgwOLFi92/rKqqmjx5sj3EySefPGHChE8/ /dRuZdNP59q33nrL5l82iNuscM2aNYc9DsYJ0OZA5jwdWxqnnHLKiSeeOHz48C+//LL9N49zMc5z b/Nu7emcf/75tpBtsm8T5/Y8UwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAMD3KisrH3nkkZycnMcee8w9L3ltbW0wGAwEAgUFBfYH3nUCAJBaxcXFS5cutUFq8eLF 06ZNczpLSkrKyspCoVBRUVFhYaF3nQAApNaQIUP2799vDfs/KyvL6bSJW01NjTUqKipyc3O96wQA ILXuu+++ZcuW1dXVLV++3NpOZ3Z2dmNjozUaGhrcwdGLTgAAUquysnLAgAFXXnllZmam+7WdtZua mqxh//fv39+7zjZtAICO1p7i0+umDW3+a2cdg7g777zzpZdecr4fdOeD48aNs1mbNerr63Nycrzr BABNhzcOZmRkRF6cO3fuww8/7PQ7zj777OHDh2/bts35g1Ao1LdvX++fDeIZNmzYt99+G27+fnDI kCFOZ3FxsTM3tNcoLy/Pu04A0JT4ODhnzhybaBw6dCiyv6amZsaMGYMHD3YuLl26dPz48d4/G8Qz bdq0tWvX7tu3z2aFf/vb35zO8vJye6Wqq6unT59eWlrqXScAaEpwHJw9e/bYsWMPHjzYot/U1dX1 7t3baRcWFs6aNcsay5cvP/30088444yhQ4du3rzZ2+eG77I52tSpU6+77jr73/1+cOvWrfn5+YFA IBgM2kvmXScAaEpkHLRBcNiwYc6u+JH9xiYdM2fOnDdvnnPRRr1NmzZZwwZB6/zmm2+ee+65zMxM z58eAABxHfY46MwE7f8W/ZGef/556zxw4MDZZ59t/1t71KhRt9122/r16539KAAASK3DHgcLCgqs UVRUFLljauR8cMmSJVdccYW1N27c6B5EVl1dnZOT061btzPPPJPPRQEAKXfY46Azv7P/b7zxRvfH KiO/H7QZn00DrfHUU0/99a9/jbz5N998s2DBAvfbQwAAUiXx/UW3bds2evRoZ1h0+22kmzlzpg2R 1s7Ly1u2bJnTP2LEiPnz59sQ6eww4/nTAwAgrg45fvDFF1+cOHFi+LvHD44ZM2bXrl3W2adPn1Ao 5Pzlhx9+OHbs2DPPPNMGQRsKvX9+AADEw+/JAAA6M8ZBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAP9LppQ4f/S/VzAgCgvRgHAQCdGeMgAKAz YxwEAHRmjIMAgM6McRAA0JkxDgIAOjPGQQBAZ8Y4CADozBgHAQCdGeMgACD5dEYfnSQAgM5DZ/TR SQIA8JRUwdcJo5MEAOApqYKvE0YnCQDAU1IFXyeMThIAgKekCr5OGJ0kAABPSRV8nTA6SQAAnpIq +DphdJIAADwlVfB1wugkAYA4mpqaZs6cOXTo0DvuuOPjjz92Omtra4PBYCAQKCgoqKys9K4zETpl VieJVBidJAAQx8svvzx37tx9+/atWLHCRiins6SkpKysLBQKFRUVFRYWeteZCJ0yq5NEKoxOEgCI w6aB7733XotOm7jV1NRYo6KiIjc317vOROiUWZ0kUmF0kgBAHNdff/2zzz6bnZ398MMPV1VVOZ12 sbGx0RoNDQ1ZWVnedSZCp8zqJJEKo5MEAOLo37//1KlTv/zyywULFpSUlDidmZmZTU1N4eZvD+0P vOts04bWeVFmD49OEqkwOkmAFjqidsI/Bg8e7Oy1Ul9fP3LkSKdz3LhxNmtzOnNycrzrTIQXZTbd k0iF0UkCAHHY8GSTQWvs3bt3+PDhTmdxcbEzOIZCoby8PO86E6FTZnWSSIXRSQIAccybN2/OnDm1 tbXPPvvs1KlTnc7y8vIZM2ZUV1dPnz69tLTUu85E6JRZnSRSYXSSAEAcBw8eXLRo0Z/+9KeJEyfW 19c7nVu3bs3Pzw8EAsFgsK6uzrvOROiUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCA L+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0 kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0k UmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mU WZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCA L+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0 kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0k UmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mU WZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCA L+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0 
kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgCAL+mUWZ0kUmF0kgBAm959990rr7zSvVhbWxsMBgOB QEFBQWVlpXedidApszpJpMLoJAGANk2YMCFyHCwpKSkrKwuFQkVFRYWFhd51JkKnzOokkQqjkwQA 4rPJYItx0CZuNTU11qioqMjNzfWuMxE6ZVYniVQYnSQAEJ8Ngi0+F83Ozm5sbLRGQ0NDVlaWd52J 0CmzOkmkwugkAYA4nMmgNSLHwczMzKamJmvY//379/eus00bWudFmT08OkmkwugkAVpItG7CX5zJ YPi74+C4ceNs1maN+vr6nJwc7zoT4UWZTfckUmF0kgBAHFd+l9NZXFzs7NIZCoXy8vK860yETpnV SSIVRicJALRH5HywvLx8xowZ1dXV06dPLy0t9a4zETplVieJVBidJADQHpHj4NatW/Pz8wOBQDAY rKur864zETplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzpl VieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADg SzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBid JADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJ VBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzpl VieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADg SzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBid JADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJ VBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzpl VieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADgSzplVieJVBidJADg SzplVieJVBidJADgSzplVieJVBidJAAQx5YtWyZNmjRs2LBHHnmksrLS6aytrQ0Gg4FAoKCgwNPO ROiUWZ0kUmF0kgBAHHfcccfy5cttkJoxY0ZxcbHTWVJSUlZWFgqFioqKCgsLvetMhE6Z1UkiFUYn CQC0hw2FgwcPdto2caupqbFGRUVFbm6ud52J0CmzOkmkwugkAYD22LJlS15entPOzs5ubGy0RkND Q1ZWlnedidApszpJpMLoJAGA9njmmWfWrFnjtDMzM5uamqxh//fv39+7zjZtaJ0XZfbw6CSRCqOT BGihI0om/Oazzz574okn3Ivjxo2zWZs16uvrc3JyvOtMhBdlNt2TSIXRSQIA8e3evXvq1KnOh5aO 4uJiZ5fOUCjkfljqRWcidMqsThKpMDpJACCO999/f+LEifv374/sLC8vnzFjRnV19fTp00tLS73r TIROmdVJIhVGJwkAxDFy5MgrIzidW7duzc/PDwQCwWCwrq7Ou85E6JRZnSRSYXSSAIAv6ZRZnSRS YXSSAIAv6ZRZnSRSYXSSAIAv6ZRZnSRSYXSSAIAv6ZRZnSRSYXSSAIAv6ZRZnSRSYXSSAIAv6ZRZ nSRSYXSSAIAv6ZRZnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICad F0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gn CZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBe dLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYd nSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRS YXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSS ICadF0gnCZBedLYdnSRSYXSSICadF0gnCZBedLYdnSRSYXSS6JBaJjphdJIA6UVn29FJIhWGJMpJ pMLoJAHSi862o5NEKgxJlJNIhdFJAqQXnW1HJ4lUGJIoJ5EKo5MESC86245OEqkwJFFOIhVGJwmQ XnS2HZ0kUmFIopxEKoxOEiC96Gw7OkmkwpBEOYlUGJ0kQHrR2XZ0kkiFIYlyEqkwOkmA9KKz7egk kQpDEuUkUmF0kgDpRWfb0UkiFYYkykmkwugkAdKLzrajk0QqDEmUk0iF0UkCpBedbUcniVQYkign kQqjkwRILzrbjk4SqTAkUU4iFUYnCZBedLYdnSRSYUiinEQqjE4SIL3obDs6SaTCkEQ5iVQYnSRA etHZdnSSSIUhiXISqTA6SYD0orPt6CSRCkMS5SRSYXSSAOlFZ9vRSSIVhiTKSaTC6CQB0ovOtqOT RCoMSZSTSIXRSQKkF51tRyeJVBiSKCeRCqOTBEgvOtuOThKpMCRRTiIVRicJkF50th2dJFJhSKKc RCqMThLAU7W1tcFgMBAIFBQUVFZWJn6HOtuOThKpMCRRTiIVRicJ4KmSkpKysrJQKFRUVFRYWJj4 HepsOzpJpMKQRDmJVBidJICnbDJYU1NjjYqKitzc3MTvUGfb0UkiFYYkykmkwugkATyVnZ3d2Nho jYaGhqysrMTvUGfb0UkiFYYkykmkwugkATyVmZnZ1NRkDfu/f//+7bxVLwDoaF6WOqBV48aNs5mg Nerr63NyclIdBwCApCouLnZ2Ew2FQnl5eamOAwBAUpWXl8+YMaO6unr69OmlpaWpjgMAQFJt3bo1 Pz8/EAgEg8G6urpUxwEAAAAAAAAAAADgB5s2bRo7dqzT3rJly9NPP53aPApYJjGxWKKxTAAfOHjw 4NVXX71ixQprT5kyZdGiRSmJsX79+ptvvtlpb9iwYdmyZd98801KkoRZJq1gsURjmQBpat26dbW1 te7FN9988/LLLz9w4EC/fv2++uqrlESyR//jH//42muvLViwICsrKzMzc/z48SmJ4TRYJg5WlTgx nAbLBEhHgwYNuv/++yN7xo4dW1hYeNppp1l/KBRKSaqVK1decsklt912W7j5V3SsqiTz3fXkyZMH DhzYt29fKyZOD8skzKoSC6sK4AP2drFXr14ff/yx27Nt27Zu3bpNmTKlqKjonHPOmTRp0ueff560 
PPYueufOndYYPXr0fffd53SuXr36wgsvdH5czmvPPPPMvffea421a9eef/75TmcnXyYOVpUWWFUA f7A3+bNmzRo1alRkp23Cd999tzW+/PLLBx988Pnnn/fo0R9//HH3y4sDBw4Eg8Hs7OxLL710yZIl n3zySY8ePaqrq51rb7755scee8yjGE1NTe4DWaRp06ZZ4/3337cw7t8kbZmEIxZLCpdJNFaVMKsK 4Dt79uwZMWJEY2PjFVdcsWLFig0b/t8pV/bu3du7d+/Nmzd7HaCgoKCwsNDZs85qrF20hm3Xzvk1 HnjgAfc97fbt28vLy73IMHny5GuuuaZv375WKA4dOvThhx/27NnzhhtuuOiii6699lqbAc2bNy+c xGUSjlgsqVom0VhVwqwqgB999tln+fn51igrKzv11FNvuukmZwsKN3/jv23bNq8D/P3vf8/IyJg/ f7617733Xrvo9FuMf/3rX3V1deeee6692fYuQGlp6V/+8hdr2GNZ7dq4caO1rdpbcXvjjTfCzTvA n3XWWQcPHgwna5mEIxZLSpZJTKwqrCqAL61Zs2b8+PETJkzo16/f0KFDZ86cmeQAn3zyyaOPPuoU 2MWLFw8ePNiprvZm+4ILLrDGa6+9Zm9lvQtgb5jdXQhsvrNy5Upr7N+//7TTTnN3ArSF43znkjTu YknJMomJVYVVBfCld955p0ePHk8//bS9ibV3s1dcccW3336bhMfdsGHDwoULne817P+LL754/fr1 VkzGjBlzzz337N69e968ebfccotHj75r1y63vXr1aqdwWek477zz3KsGDRr05JNPWrby8vJx48a5 cx9PRS+Wt99+OznLpIXo1YBVhVUF8Kv6+nq33djY6PXD7d27Nzc3Nzs7OxAIWNFwOv/973/bxYaG hq+++mrixIm2ORcWFn799ddeBKiurj7rrLOqqqpa9L/77rvXXHNNZM5bb701JydnypQpkYvII3EW y549e+666y5Pl0mkjz766LLLLuvbt681WlzFquJgVQGQiAcffLCkpMQa//vf/zIyMl555RWn3zbt 7t27L1iwwOsAr7766hlnnOHszhfp0Ucfffzxx8PNHzRNnz7d6xgtpHyxhJt3evz000/vu+8+q7Qv v/zyRRddtH///iQ8bmtSvkxYVQB0lM8++8zZqSDcvIfD5s2b16xZY++oS0tLL7nkEqfYHjx4MPK3 SrwzbNiwpUuXRu/ON3DgwHXr1j3wwAPnn3++szOGpyKXSVhgsdx///0DBgw4++yz3SlGMBh097tI GlaVaGqrCoDvyzbb66+/fsiQIVlZWc5nNbYJX3XVVRUVFTbvsDe0kydPPnToUNLyzJ071/63t83X XXddZP8555zTs2fPwsJCS+V1huhlEk7dYtmxY8fw4cOdn96yDLYcnHK6a9cuWyDJ/EUUVpVoUqsK gO/LCqy9ox45cmS4eb+L/Pz8O++8M9y8aTt7G86aNcve0C5cuDA5uxZEskcMBAKRPzb10EMPbdmy xevHbW2ZhJO+WBobG909CS2P87soxvLYTMdpP/LII3PmzPEug4tVJZrOqgIgEbYVuz99X1lZ2b17 d3tPu2rVqh49egwYMGDChAlJ2N2iNevXr0/Jj03FXCbWTuZiueeeewYNGpSZmXn77bcfOHDg448/ tjmO89sju3fv/sMf/vDpp5+Gm3eM9DRGJFaVaAqrCoAEffTRR7169XK/tujTp49TbG2jTsJnSm26 7bbbkr+fQ2vLJOzZYrGRbvTo0e49z54924a/cPM3SvnNws3fD7q/PTJv3rzXX3+9w2PEl/JVJf5h IJ1kVQHwfcU/8ehLL720aNGiKVOmDBw4cPPmzVZ+r7322lTEbJVVlT179nT43cZZLKlaJjYBLCws dNpW0mfNmuW0q6qqunXrVtvM098eUV5V4hwq4uo8qwqA76W1E49u3749JycnMzNzzZo1VmB79Ogx bNiwSZMmue9m/S3mYkntMvniiy969uy5detWaz/11FM2PXSvslmG8wtgnv72iOaqkvJDRQRXFQDt EXk+1pgnHrUNdv78+e4HTfYud/jw4SmJmmRxzseaqmVihf3mm2+2ML169brxxhvDzQdiZ2VlPfDA A6FQyCaGns4ylFeV1B4qIriqAGi/FudjbfPEo7ZF2za+bNmy5EVMuu97PtakLZNp06Y5L5ZNfM47 77xVq1aFmwfH22+/fdSoUTYaenrQmeaqktpDRWRXFQDt1+J8rO058eimTZvee++9pCdNksM7H6t3 y8Rq++LFi50Zx5///Od//vOfTr91XnXVVc4JEZJDalVROFREbVUBcHiiz8eazBOPilA7H6vLBjub 7NhrdNNNN4Wbd/50X6ndu3dnZGTMnj07CTEcOqtKCg8VkV1VAByemOdjTeaJRxUIno813Pxxn01z bAJokfbv329Tv+XLl+/bt88aVlqrqqqsCD/xxBMvvPBCcvKkcFXROVREc1UBkIjWzseatBOPppzm +VgdNg6OGTPGab/11luXXHKJjQi7du0KBoN9+vRxYidNaleVlB8qEtZeVQActpSfjzXl1M7HGnkq usiP+4wNPVaKkxMjWmpXlZQfKhLWW1UAdIhUnY81tTTPxxrzVHSRH/dZkbcwXsdw6Jy6N7WHimiu KgA6VpLPx5pymudjDbdyKrokfNzXgs6pex0pPFREdlUBgEQIno81Ly9vwoQJrZ2KzuuP+1wp/z0W l8ihIoKrCgAkTuR8rI4dO3aMGDFi9OjRzjwihaeiEzl1b1jpUBGpVQUAOorC+VgjjRw50p1xpORU dDqn7lU7VERtVQGADpSq87FGi9wpNJmnolP4PZZoUoeKOHRWFQDoWKk6H2u0yJ1Ck3MqOp1T90Ye JxIWO1TEpbOqAEDHSsn5WKN5vVOozu+xRIp5nEhY6VCRSCKrCgB0LI/Ox3oYvN4pVOH3WFqIeZxI WOxQEZfOqgIAOAwKv8fiin+cSDKT6BwqAgDwSGp/j6UFneNEwkqHigAAvJPaU/dGS/lxImGlQ0UA AF4Q+T2WmBSOEwkrHSoCAOhYOr/H0pqUHycS/u5wnORDRQAAHlH7PZbWKBwnEk7FoSIAAK8J/h5L TCk/TiScigM0AABekD11bwq15ziRcBIP0AAAeEHq1L0ipI4TAQB4SuTUvVLUjhMBAHQ45/dYws3H 36X21L0ilI8TAQB0oBa/xxJO9U+yKNA/TgQA0IEif48lnLqfZFGQLseJAAA6UIs9QpP5kyyC0uU4 EQBAB4rcIzScrJ9k0cFxIgDQyXXaPUI5TgQA4Ohse4Q6OE4EANA5xT91b+d8VwAA6AykTt0LAEDy KZy6FwCAVEnVqXsBABCR/FP3AgCgg51CAQCdHDuFAgAAAAAAAAAAAAAAAICO/wP/DC8iDQplbmRz dHJlYW0NCmVuZG9iag0KMTAyIDAgb2JqDQo8PC9UeXBlL1BhZ2UvUGFyZW50IDIgMCBSL1Jlc291 
cmNlczw8L0ZvbnQ8PC9GMiA3IDAgUj4+L1hPYmplY3Q8PC9JbWFnZTEwNCAxMDQgMCBSL0ltYWdl MTA2IDEwNiAwIFI+Pi9Qcm9jU2V0Wy9QREYvVGV4dC9JbWFnZUIvSW1hZ2VDL0ltYWdlSV0gPj4v TWVkaWFCb3hbIDAgMCA2MTIgNzkyXSAvQ29udGVudHMgMTAzIDAgUi9Hcm91cDw8L1R5cGUvR3Jv dXAvUy9UcmFuc3BhcmVuY3kvQ1MvRGV2aWNlUkdCPj4vVGFicy9TL1N0cnVjdFBhcmVudHMgNDA+ Pg0KZW5kb2JqDQoxMDMgMCBvYmoNCjw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggMzI3Pj4N CnN0cmVhbQ0KeJyN1E1rAjEQBuB7IP/hPargbCY7+VgQD360WCq0dEsPpQcp6klL+/8PjYsUW7RO 9pznZbIzg+oBo1G1nC5mcOMxJrMpGFtrhg5DrqlGaAJJQBaPr7U1m4E1mC+nwMlNru5X+y166/3w +al/ZCatNdWNBzM5QbuxhuHKx5CcKEZIyuRqtDtr3CHS4daa1x76b2jvrJm3Z5L8paQ/NjeZWDr7 ilhfE+ui5UCNBhMNlpjqrNGCRguRctRoUaOJJ+81WtJoPlHSYFmDlW5k1bM1Cq20R4yq/mCn4LKU btNxF2flhIsNiapUvjoPRQtCjapDWDML4khZqRyBzzLa5LrD5Q9KisSlRvbwqaEs3V55GWBvjQRH HLownzLqROGwJvBe4qrFbrVdc9klsw88ngsM5wO5ieR/8spT/J/XrY5fifE08RsUARbFDQplbmRz dHJlYW0NCmVuZG9iag0KMTA0IDAgb2JqDQo8PC9UeXBlL1hPYmplY3QvU3VidHlwZS9JbWFnZS9X aWR0aCA2MDAvSGVpZ2h0IDM3MS9Db2xvclNwYWNlL0RldmljZVJHQi9CaXRzUGVyQ29tcG9uZW50 IDgvSW50ZXJwb2xhdGUgZmFsc2UvU01hc2sgMTA1IDAgUi9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVu Z3RoIDc5Njc+Pg0Kc3RyZWFtDQp4nO3dC3BU5d3HcWzpdKrUaqdW33am7UQtLVOLIYKAXLQqJMIC mpioXEcSwgaJAeSiBAQXbNMOeMkFSEGFUrRjWgpUQATlMlIgkFrqhUsIl5ANhSUJmCYmEPf9vznv nNnubiDJ7tn9m/P9DJN59tlD9rdnT57/Prvn4vUCAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA0dGp2eLFi4P2+7YDmQvv3bv3kUce6dq167XXXis/x40bd+rUqSs8nOnr X//6DTfc0KtXr7Vr11r01FrT/+c//9nhcPxPsyFDhmzbtq2lzJ07d+7SpUtsbOzWrVt9l2n9GgAA qGIM79ddd92JEycC+33bLdVBKWHf/OY3/e66/vrrjx8/3tLDBbrmmmtef/11K57aVfsXLFjgF+Zr X/ua7xuDljK/+eab7VgDAABVzHE7Pj4+sD+wHahv375y75NPPrl9+/aqqqqNGzd2795depKTk1t6 OPNmU1NTeXm5LCmd/fv3D9NzCv5YQfs/+ugjmZNKFR42bNiuXbt279798MMPSx38xje+UVpaGrh8 Y2PjkSNHRo4cKT1Dhw41Otu0BgAAqhiD/NixY+XnH//4R7/+wHag7373u3LvBx98YPZs3bpVen7w gx+09HB+nadPn5bOm2++2ezZuXNnr169ZErVu3dvKS6+C69YsWL48OE33HBDt27dJHZ1dbV5144d O2T5b3/72/J/pd2aOpieni7tp556yneBESNGSGdqampLmaV2S8+NN97YjjUAAFDFGORlFnPLLbfc dNNNHo/Htz+wHSglJaVT8zd9iYmJ69atu3Tp0lUfzmjLZLCurk6mV6NHj5bOiRMnGv2nTp2S6Zg5 Ue3cufOBAweMu9555x2/jx+lZhl3yTKSwfd/taYODhgwQNoyK/RdYMmSJdLZs2fPwOUbGhpOnDgx btw46bn33nvbsQYAAKqYg3xRUZExMfTr97bwBZn5G86fPy8lw+z//ve/P2XKlFbuJ2P68Y9/XFJS YiyzdOlS6ZGfMtf7/e9/L+3HHnvMuKtfv35yc9myZZ9//vmWLVuk/ZOf/MS4y5jEyQL79u3bu3fv Pffc05o6KGmlfeHCBd8Fjh8/Lp0yy7tCZqmzGzZsaMcaAACo4lsUZDoj7ffee8/bljrobZ7Zvf32 28OHDzd3F7n55puLi4tbeji/grJmzRr5DeYyDofDb5lbb73VvLexsXHTpk2TJk2Szk7N+7QY/VJJ 5ebmzZuNm3/9619bUwevv/56acu01HeB2traTj4fe/om6dKlyw9/+MNBgwZJFW7fGgAAqOJbFM6c OSODf0xMjNSFwDrYmt8mFUTKQa9evTr57EYS+HBSNc6ePSszPqNq9O7d23dGJhn86uB1111n3CWV 5Xvf+17QimxUNPPrQo/H05o6GBsbK+2ysjLfBY4dOyadcXFxbX36rVkDAABV/Ab5N954Q25Onz69 9XWwT58+cq/M0cyekydPdvL5XPEKDyclQyZ0nXw+jxXGR5qffvpp4H836suYMWPWr19vfHpp/rZh w4Z18tlZZcOGDa2pg8YOQjNnzvRdYNasWZ2a9/9szdNv6xoAAKgSOMgPHjzY3OGkpWV8GV8s/vSn P123bl1VVdXevXuN/UYGDRrUmof79a9/bXSah9JnZ2d3av5O8NSpU8uXL/f9VbfccovclBon08kX XnjB+I8yBZO7CgsLpX3XXXdt27Zt3759xrEMV62Dn3322debuVyu06dP//3vf09MTJTS3Llz58OH D7fm6bd1DQAAVAkc5E+cONGlS5fW18GmpqaYmJj//qiy07XXXmt+VXflh5P/PnDgwE7NX6gZe6se OHDA97B0qUr//Oc/jYVnzJjh+yg33XST/Pz444/lLrfb7buXqfkbrppBCvE111zj+2vlpnReIXMo awAAoErQQT4vL6/1ddDbfDzd+PHjb7vttm9961tdu3aVKZV5pENrHu7kyZPf+c53OvkceL59+3Yp jjfeeOOQIUPWr19vLnnu3Lk5c+bIQ/zoRz+aOnXq0aNH5X/J5Mu4d9euXb169ZIiLrPCPXv2tLIO infffTchIUEmm1KL4+Pj/erXVZ9+m9YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAACWOHTuWnZ2dlJS0aNGiyspKo7O6utrpdDocjqysLEs7AQCIrmnTpm3evFmK VEFBQU5OjtGZl5dXWFjodrsXLlzocrms6wQAQAkphSNGjDDaMnGrqqqSRkVFRWpqqnWdAAAocezY 
sYyMDKOdnJzc2Ngojbq6usTEROs6AQBQYtWqVXv27DHaCQkJTU1N0pCf8fHx1nVe1X4ACLfWDD5x E/Zf9V8rxzF8JZSWli5ZssS8mZaWJrM2adTW1qakpFjXCQA6ta8OxsTEmG0puImJiXfeeefEiRMP HDjgu4zh5z//eVJS0uHDh41+j8cjS8bFxQ0cOHDBggWXL1+2+jnCdPbs2d/97nfGh5aGnJwcY5dO t9ttflhqRScA6BRiHZSBrmfPnnl5eWfOnFm3bl3fvn0rKir8lvniiy+WLVv26KOPGjdHjRr1m9/8 Rhb79NNPJ0+e/OKLL1r/LPF/Pvnkk+eee66+vt63s6ioqKCgQN6c5Obm5ufnW9cJADqFWAfffPPN 1atXm/1Lly6dOXOm3zLe5jlgjx49jLbMHOWm0S4tLe3fv78VzwuB5B3IAz6MzrKysszMTIfD4XQ6 a2pqrOsEAJ1CrIPp6emnT582+w8dOjRkyBC/ZSorK2XS98wzzxg3n3766Tlz5siU4eTJkxY+MQAA WiHEOiiTO2OPCMPFixd/8YtfmMsYunfvPnLkyOPHj5vLzJgxIyEh4dZbb5WiafYDABB5IdbBwYMH m3vgi507dwbOB1ty/vz5RYsWcXwZACCKQqyD06dP/+1vf2v2v/jiizLX81vGT//+/Wtra422zA1l RhnO5wMAQFuEWAcrKioGDBjwyiuvnD59+uWXX5Z2eXm53zJ+FixYMHfuXFmspqZGaqjT6bToqQEA cFWhHz9YUlIyceLEHj16jB8/3u/4waCPePnyZZfLdd9998l/yczMlClh2J8UAACtxPlkAAB2Rh0E AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABWi5uwP+z/ov2cAABoLeogAMDOqIMAADuj DgIA7Iw6CACwM+ogAMDOqIMA8JXm8XiSkpL8eh7wYXRWV1c7nU6Hw5GVlVVZWRmWzlDoqT56kgAA 2qq8vDw9Pd0sdobdu3dnZ2f7LZmXl1dYWOh2uxcuXOhyucLSGQo91UdPEgBAW8lMcNOmTX51cOXK latWrfJbUmZzVVVV0qioqEhNTQ1LZyj0VB89SQAAbWXUJr86OHv27FmzZj3xxBOLFy/2eDxGZ3Jy cmNjozTq6uoSExPD0hkKPdVHTxIAQPv41UGpWTt27Kivry8pKSkoKDA6ExISmpqapCE/4+Pjw9J5 VftbZkX1aR89SQC0RogDJjokvzpokhmczAqNdlpamkzlpFFbW5uSkhKWzlDomYXpSQIAaJ+W6mBD Q8OECROMdk5OjrGfp9vtzsjICEtnKPRUHz1JAADt41cHp02bVlxcXF9fLz+XL19udBYVFRUUFHg8 ntzc3Pz8/LB0hkJP9dGTBADQPn51sLS0dPbs2SNGjJg/f77xYaYoKyvLzMx0OBxOp7OmpiYsnaHQ U330JAEA2Iee6qMnCQDAPvRUHz1JAAD2oaf66EkCALAPPdVHTxIAgH3oqT56kgAA7ENP9dGTBABg H3qqj54kAAD70FN99CQBANiHnuqjJwkAwD70VB89SQAA9qGn+uhJAgCwDz3VR08SAIB96Kk+epIA AOxDT/XRkwQAYB96qo+eJAAA+9BTffQkAQDYh57qoycJAMA+9FQfPUkAAPahp/roSQIAsA891UdP EgCAfeipPnqSAADsQ0/10ZMEAGAfeqqPniQAAPvQU330JAEA2Iee6qMnCQDAPvRUHz1JAAD2oaf6 6EkCALAPPdVHTxIAgH3oqT56kgAA7ENP9dGTBABgH3qqj54kAAD70FN99CQBANiHnuqjJwkAwD70 VB89SQAA9qGn+uhJAgCwDz3VR08SAIB96Kk+epIAAOxDT/XRkwQAYB96qo+eJAAA+9BTffQkAQDY h57qoycJAMA+9FQfPUkAAPahp/roSQIAsA891UdPEgCAfeipPnqSAADsQ0/10ZMEAGAfeqqPniQA APvQU330JAEAtIPH40lKSvLtqa6udjqdDocjKyursrLSus5Q6Kk+epIAANqqvLw8PT39gQce8O3M y8srLCx0u90LFy50uVzWdYZCT/XRkwQA0FYyE9y0aZNfHZSJW1VVlTQqKipSU1Ot6wyFnuqjJwkA oK2M2uRXB5OTkxsbG6VRV1eXmJhoXWco9FQfPUkAAO3jVwcTEhKampqkIT/j4+Ot67yq/S2zovq0 j54kAFojtPESHZNfHUxLS5NZmzRqa2tTUlKs6wyFnlmYniQAgPbxq4M5OTnGLp1utzsjI8O6zlDo qT56kgAA2sevDhYVFRUUFHg8ntzc3Pz8fOs6Q6Gn+uhJAgBoH786WFZWlpmZ6XA4nE5nTU2NdZ2h 0FN99CQBANiHnuqjJwkAwD70VB89SQAA9qGn+uhJAgCwDz3VR08SAIB96Kk+epIAAOxDT/XRkwQA YB96qo+eJAAA+9BTffQkAQDYh57qoycJAMA+9FQfPUkAAPahp/roSQIAsA891UdPEgCAfeipPnqS AADsQ0/10ZMEAGAfeqqPniQAAPvQU330JAEA2Iee6qMnCQDAPvRUHz1JAAD2oaf66EkCALAPPdVH TxIAgH3oqT56kgAA7ENP9dGTBABgH3qqj54kAAD70FN99CQBANiHnuqjJwkAwD70VB89SQAA9qGn +uhJAgCwDz3VR08SAIB96Kk+epIAAOxDT/XRkwQAYB96qo+eJAAA+9BTffQkAQDYh57qoycJAMA+ 9FQfPUkAAPahp/roSQIAsA891UdPEgCAfeipPnqSAADsQ0/10ZMEAGAfeqqPniQAAPvQU330JAEA 2Iee6qMnCQDAPvRUHz1JAAD2oaf66EkCALAPPdVHTxIAgH3oqT56kgAA7ENP9dGTBABgH3qqj54k AAD70FN99CQBAISFx+N5wIfRWV1d7XQ6HQ5HVlZWZWVlWDpDoaf66EkCAAiL3bt3Z2dn+3Xm5eUV Fha63e6FCxe6XK6wdIZCT/XRkwQAEBYrV65ctWqVX6fM5qqqqqRRUVGRmpoals5Q6Kk+epIAAMJi 9uzZs2bNeuKJJxYvXuzxeIzO5OTkxsZGadTV1SUmJoalMxR6qo+eJACAsJCatWPHjvr6+pKSkoKC AqMzISGhqalJGvIzPj4+LJ1Xtb9lVlSf9tGTBEBrhG2shA3IDE5mhUY7LS1NpnLSqK2tTUlJCUtn KPTMwvQkAQCEV0NDw4QJE4x2Tk6OsZ+n2+3OyMgIS2co9FQfPUkAAGExbdq04uLi+vp6+bl8+XKj s6ioqKCgwOPx5Obm5ufnh6UzFHqqj54kAICwKC0tnT179ogRI+bPn298mCnKysoyMzMdDofT6ayp qQlLZyj0VB89SQAA9qGn+uhJAgCwDz3VR08SAIB96Kk+epIAAOxDT/XRkwQAYB96qo+eJAAA+9BT 
[base64-encoded PDF attachment omitted]
hwZK27AhDxO/wu4xfhe9GbYraAwT/HLahmLeuMgZiwt3lUaRGv9PNgzFJhZANkMD87PP+PBzUPiV I9UMKo/dGo+f+OB2jTU29mtoBjpjw4ZmiJjNzmb4/xxsht7WOQE0WMpm6I21hAOJPWLntGDysIuE dB4MDRkbe73BycKEBg1hcsrI+nnmYzOHOdksGnhnn0gZxwIcJnpj+TcUG2edMXw49WpIAz+JAQOx 7SOJ6M05C4Q/SgkB6I27BYOIndse6s65uhrG5NSOIdtjg0TC4WwdBMDZ4hhQKnwhUQzMxr4PvaFU xMYPHiCr2PYPxgknWkzWwNY4KUGXgxtg7r6HMPIoFQO76TADNbC5C9NhDoYXvXE94+FjsFryBDZ4 7uRmaThmaAhpHDfRG0rFGCDGoOcxmAxIxTGfAoHJaCLxxzroAb0tihalYrDSGqif3OMbGJgH2DDo bh4UF0rFbP2UGEzmEErF5CGGy9k8RYBsn9z0GxJp8hxjKBWTBxmDPibXdQNqctUwJMdkyeHRa56C RqmY5NPA1uRxhieAyTM2t8SxAWQz9LagK2444tSJjiHidSC/HAysA+F1ZHCkhVP8uG2gis+LB5Do vHmgwJmVi+cZHocXzkuNO9fVz5yZMHnjgTgublS4EVxcBg39LNZQhjS2hhwZ/PokFr0NKh6lIqLC rEOzicx3xGbxzOKYypqYlkOjazG/UCqi3g2m5UOOozEXBSZljuuG41jMWw2zndmMn+Jiitc2cvRz 6GjWzyTDHQt3aXGQg8lZxLTlYIRwOg4T4sdxTA6y7B1+/RzZhDnYDB1zh+fCuxsGQHChwx2ed4zh nFvHT3ki8w4sLwzgUbC/hYmbnoOJrbjqOTiySObYBHFkUUvkvLNDNZXz1gn3GMJrjoajk/BM3nD6 EB4CUYhgMtyK3rjDQ8Tk3KUhFeVc7aNShcllBeu+ZP1Cb8wkN/TGkEUpg8k6Y+gtCs/jT3/69Rv2 no/j8fuvv/767ddf/h7H7n88fv32P4+Ydfz0z3/+93/7CeVHCdVKKCmhegmlFVQOa11A7l9RskPZ E/W3/6Az3MLGDz6h//qvf/7fH4hff3n0ryhNn78/fXIo54Xu3nkUpS8D1hLKSigvoUYJNUuoVUK1 owbbhfgC28X4Aus1WC0ArRaBVgtBq8Wg1YLQalGQWhSkFgWpRUFqUZBaFKQWBalFQXZR8HUpFqM9 EztWvafR09A0LI2sAmOkMdNYT2MeaaTnmZ5nep7peabnmZ5nep7peabnlZ5Xel7peaXnpXe1Lyf/ rfbJvCt+49Wx3Xhf4y0CO+W+w/pOuRfYTrkX2E65F9hOuRfYTrkX2E65F9hOuRfYTrkX2K5+XGC1 KGgtClqLgtaioLUoaC0Kuo1CbiIy41cm1nomFnaCT6OlIWn0NDQNS8PTGGnMNNJzS88tPbf03NJz u0vZ1+i/paz6XcquHFO7S9nchf3tn//1v//93OhkRfJX2fGChzMGO+2+o3bSfUftlPu+O94J9x21 0+07aifbd9ROte+onWjfUTvNvqN2heMdVeLeStxbiXsvce8l7r3EvZe49xL3XuLeS9x7iXsvce8l 7keJ+1Hifmy4x5/yv8CevrRdUf29vuJZgFe92KOznp0IbEG+ouS9nD2Hcj5UcON8vq19JZSWUFZC eQk1SqhZQq0SKg9pn2DbEL/DpAar8d9qAWi1CLRaCFotBq0WhFaLgtSiILUoSC0KUouC1KIgtSjI Ngq5ZXvVitwoWW6ULDdKlhsly42SZRGw3IJZbsEstzuWWzBLz56ePT17evb07Ok5dz+S9z3i6dnT c26MJI9FkmdIOc+Qnyb/vfSN29qXHZ8H0433kfdaZwR2yr3Adsp9h/Wdci+wnXIvsJ1yL7Cdci+w nXIvsJ1yL7Cdci+wXf24wGpR6LUoaC0KWouC1qKg2yi8ThzPVM67Ecm7ERkv4WbG5N2I5N2I5N2I 5N2I5N2I5N2I5N2I5N2I5N2I5N2I5N2InHcjnwb9LQVV71IwL4Bk3ST4D6ep53bnlfyvIXzw8Hai /YDaKPbnk9kH1EavP5/M7lG2UevPJ7MPqI1Wfz6ZfUBt6sXPJ7MPqBL3VuLeStxbiXsvce8l7r3E vZe49xL3XuLeS9x7iXsvce8l7keJ+1Hifmy4x+OkX2Dpq19RrzJ2lmg8xvNH7dmiX7Wx5/bkK+pV 5H5/+nxWODzYeuP8/VayhNISykooL6FGCTVLqFVCvU5mH2DbEL/DpAar8d9qAWi1CLRaCFotBq0W hFaLgtSiILUoSC0KUouC1KIg2yhcrr/xsPkzYZ97Jzy18jRaGpJGZnfXNCwNT2OkkZ57etb0rOlZ 03PujHpe7XRNz5qec9PU8wjU87zYz/Pipzl/L2V+W8uyP2s33u2t+MpOsBfYTrAX2E6w77C+E+wF thPsBbYT7AW2E+wFthPsBbYT7AW2KxsXWC0KvRaFXouC1qKgtSjoNgrXZTzvQXreg/S8B+n2UnBm TN6D9LwH6XkP0vMepOc9SM97kJ73ID3vQXreg/S8B+nnPcinQX9LQe13KZiXPX3cJPgPJ63n9uWV /K8hfPBwUr+R7OY8do/aCHZzHrtHbeS6OY/domwj1s157B61kermPHaPKnFvJe6txL2VuLcS91bi 3kvce4l7L3HvJe69xL2XuPcS917i3kvce4n7UeJ+lLgfG+71kK+w9GVX1KtsnYVcj68Xulv0q4La s4LKV1Q+9vQsoJpPLuihd87fnqzsJZSWUFZCeQk1SqhZQq0S6nUe+wDbhvgdJjVYjf9WC0CrRaDV QtBqMWi1ILRaFKQWBalFQWpRkFoUpBYF2Ubh8rSy5tNDmk8PaT49pPn0kObTQ5pPD2k+PaRN07A0 PI303NJzS8+SniU95/5J80JHJT1Les6tleZBSfMwqfJ1M7id8/dS5re1LPvrx433/lZ8ZSfYC2wn 2AtsJ9h3WN8J9gLbCfYC2wn2AtsJ9gLbCfYC2wn2AtuVjQusFoVei0KvRUFrUdBaFHQbhesynpck mpckmpck2l8KzozJSxLNSxLNSxLNSxLNSxLNSxLNSxLNSxLNSxLNSxLVL28m7Af9LQW/nse+p2B/ DeUmwX84j2k+SqnHi4tW8PB+HrtHbRS7OY/dozZ63ZzH7lEbtW7OY/eojVY357F71KZebM5j96gS 91bi3krcW4l7K3HvJe69xL2XuPcS917i3kvc5+BvX7ySCqhXQFoBWQX0HLjd/mVMKqBeAWkFZBWQ V0CjApoV0KqA8jzzAVXivJVIT1e34pQKqFdAWgFZBeQV0KiAZgW0KqBX+O5RJc5bifRWYr2VaG8l 3luJ+FZivpWobyXupcS91PRe4l5K3EuJeylxLyXupcS9lLiXEve9xH0vcd9rxabEfS9x30vc9xL3 vcR9L3HfS9xriXstca8l7rVW6Uvca4l7LXGvJe61xL2WuLcS91bi3j5x/5/ydb2+fUVEKqD+0Ic9 
/DEe84Hv4/pjXbpt1krOX+vSPUp/RpmuN5jtYPkY+/Nob18fM9ij80xt+W5e86+w10Pv55na8iEG +/qYwTfv738dfq1/H2CzBlsl2GsJ/ADbxPkK2wT6CttE+grbhfoC24X6AvMarBYFqUVBalHo2yhc HgywfDDA8sEAywcDLB8MsHwwwPLBAMsHAywfDLB8MMDywQDLBwMsHwywfDDAvn7bxn6I31Kkt7sU sVfH88b7+zv/faevC2ynrwtsp68LbKevC2ynrwtsp68LbKevC2ynr8vXlOz0dYHtsvwCq0VBa1HQ WhS0FgWtRUFrUdBaFLQWBatFwWpRsFoUrBYF20Xh+m0alm9CWX6bhuW3aVh+m4blyxSWb4xYvjFi +caI5Rsjlm+MWL4xYvnGiOUbI5ZvjFi+MWL5xojlGyOWb4xYvjFi+caI5bdpWH6bhq0vf5ffz/lb STO7K2nj1Z/eeF/vf0zfCfYC2wn2AtsJ9gLbCfYd5jvBXmA7wV5gO8FeYDvBXmA7wV5gu7JxgdWi 4LUoeC0Kvo3C5Q/Ill/1YPnFFpZfbIEvX3saLQ1Jo6ehaVgansZIY6aRntvdxvg1xG8pMo67FFnZ cbvZGHt72x2Njb6usI2+rrCNvq6wjb6usI2+rrCNvq6wjb6usI2+rrCNvi6wucnyK2wbhcse1fN5 Bc/nFby9op5yy+cVPJ9X8HxewfN5Bc/nFTyfV3C5qat/jOObJKfcSNLby7vtvf/w90/T1z77tZl+ rTyvmcvNiOX9HfMT5e0WJSVUL6G0hLId6vKmt+fbBp5vG3i+beD5N2bvX9bdvceM4ImICPpXlF8C mE+oeNe98x8C+OzvA/7rHwXuQbMCWhVQ/lHgA2qjmJ8vXz6gNor5+fLlA2qjmJ//KPABVWK+lahv Je6lxL2UuJcS91LiXkrcS4l7KXEvJe7lG/f/D0XQqZQNCmVuZHN0cmVhbQ0KZW5kb2JqDQoxNjU4 IDAgb2JqDQo8PC9UeXBlL09ialN0bS9OIDUwMC9GaXJzdCA1MjUyL0ZpbHRlci9GbGF0ZURlY29k ZS9MZW5ndGggNTY3Mz4+DQpzdHJlYW0NCnicjVy7riw7bs0N+B/6D05J1IMEBhM5m+Ri5maGgzFg ODEM/39kLlazTjW72ZvJuby9V5GS1iIlsWvvtuZ+HI+2pjyI9b/reOyF/7ZHawNGf7TZYdCjCaBr PHo37Hz0ZeD1oMPA+0HDwKzuDCyP0QDex2NMgHd7DAF498ckgDc95gZ4j8c6DDwfaxh4PRYbeD92 NzDrAA0sjy0AaxwmgBXIG2CmhzSAeT5kGqY/RAyj/3fMBkvUwj8Ls2p9w8KsdVpq6eP9IFj6Tx94 AqPu255QCDV7Qp2SzUv0f0nsWY0xCAMRBQ+dXtuHxpgY0z7U1TxHoE9MBN8HfjARjdXVFsOpKx4b ln7G9oND/1f6hKUxRJlQSx79wEh3O9TC8u/W1OIGqz+6TgQWqaU0q6UENgxyN2WwE2K0pRaY3zqt Tg0xlLdOEzGaxiAsxFZXfYC13TXG2IihvPTZEKNrjDkQo2uMicXeHSrB6u6+oBfE0P/tNsGtsul7 IIZKqos9oZOhw56gphYjmoak1hGN9KcNCti6xNSwxJvmgzr0u1UI1Lc9q1pUICwVI01EI1U5gcY9 DlUqYUZKLY2NGLp0NBtiDI0xJ2Lo0GiyxdAnFnS/h0bbYjNH1pDhIHhGDJ0+SUcMdUCCtNlK9zhO krtaUBPoHgdyZOsSj9YQQwc5GgS7VVKjQVJbE3R06HvrD0bfiKE0DkJ6IhUGQTlbM3aQKUdTduiP YWmMsW186mUeiKGiGSb5rRk2MAVVnT672fzpCHhhBErUEEhvI3XFOEI2i/GhxM/DVkPB8zBWNffm yZEOcp5aU0nNcx5KylQ5wdInumWFJtMkW3FdiEmmHB3LJKTG1kBzdHjWhJ4DeYlBKjEYiy72nANe NDXmtFHpY3PZCFRIc5lKUF0sEbfSONkUi9IjpliBZZ5V3uswz4IyZJ41ndepP83zdc5NJ7NO/WkK rVN/+s8y/bHm+TL9seb5Mv2xEr9Mf6zKXqY/1kDL9McHah30x7rYy/THmufL9Mea58v0x0reMv2x Dnyh1KqlMc7yoHm+IEy1NIZxySoQLXw2D8UxKjlrYmudsCKjlb6dT2hJ7ShQrCLUHEY0zXPNUkTT SW+rdax5vq3Wwem2Wscq9G21jjXPt9U6Vsq21TrWJN5W61gHvq3WsQpOFYoYhL0GWmMV67bixhpo Q5j6mUYTKJZ1EZUTw80HW61DwWerdax5zlbrWAlgq3WsU2DTH2ues9U61qRjq3Wsj7HVOlaS2Wod q7zZah3rkrDVOtZhsNU6lGG2Wsc6DLZaxypWtlrHShRbrWNdCLZap5gHgzz9TGOcdVwpY0EVYK0C LKjorKmrHyGu5rmmm/106fZlfGjGS7PVUPlIM1Y1z+XkSMFyak3zXM55aHaL1QNWUQuhSrEmpwxb cR2BDFOOLrtM22I0p2Uiu1nzXHdLPKtTlYXtllWYYocBVmrFtjHssbLPEegT21SiGS+Cusa67CJs OOy1B1KesWseBzZnxo+O1hAGrB12umAk2dFsuBjB0VFVGRM++kIo1NaDbEWxcAdhSRl14iBbSdvZ BzZHtq19YHdkSO6YJhk4P6ZpBmXymCYaMK3LjmhIzGOZbKDtwzZyxg5/bBMO6tixTTn40cG25GDg YNMOikE7zskrqllVN8pbw+QFA9EyYkepAROTF/DQOsZnBbcRJi/Ih0bLPDBOY7bnIyPawOStgLWB yQuqhi7khIloE5MXxNFShWjYLNrC5AWDtpKvzvAY9nT9FIFt7QUk6zHDsIgm244dB05EgAnqYD9Q 0QzVD6ycoNLo1OABgu4NRc1OK71jfIJS3jvKmiBH9BCJuaGkdcK/Ao3qVodoWIE+sBqCnawPVCpB MvYBVQji6OohGspPn8vGC2cL0xLormMP0U8RDYWsGU3dyBHsqV1QyQXsdgGngipOBxlAYBpvWkb0 wIeltY1Q914MHXsENWhQMDw9gmA4WGQ9g2AMKFpEmLfgASKjBStAA6oQ1EYa8C5QgooaY0D+6kkE j2G8unfYYwgxoVdBEaZF5gHHzmXRcHAgnALVRLRtg8QECfuhmoiGoqzDwWOoDPqputHR2IQWzGVY jTaarS9Qo9n6gpvRbX2BGt2GDo0OOxkLDht69oAHjEnPJogGJejhBNGwQ+jpBGxiZVXJmBtONWNa XoCbMS0voLsxLS+wLmNZXiDOWJYXyLRxnoWh57EtL8D52JYX0PNgywuoZrARgFoyLBetCA+jzE7T w/yIncy7EYDzmyYWPKCW6HEFY8CYzvOKoJbogQVjADdaYDAGZIQeWeABtWRarbUryLRiK6gl06qt 6W5auRUUkIl6q/sxoqHgqoloqLjdDpETJfc8809smDpePIZ9o9sJRhPAVgc3Hy1z+BR3FRRdNfEp 
iq6aOPuh6CqjuFKh6KqJuwmKrpq4RKHoqonbCYqumgiBoqsm7jsoumdlWCi6aiIaiq6aiIai221M C0W3H92uhrhLHMjKhaKrpt0cxcZrlz9CtG6XsGaf2u3KFGVHaIyvnzeFQzAcOyxqFsG0+w1biGnX FXiwUzHOnmratcJuSnYPAJtqDrswILCdVc+52fFb5QfTzuSHPWYHdaOF7PRutCAj9DAODzbIZbTY tQz3xn6eLLcN0g7V+xzOYWdfTGjYcXQDa7s2Flw/teOl0WKnjma02CGxGS12zmpGi22p3YY+zxMZ PNgWRUaLnUbJaLEDHRkt8zwdYc2mHZmMlvMcZbQsO5QYLXYmmBvR7NCycJc81nnyQDTbfnF7VtO2 SUI0ZATj9qCmbeB2abVDIBsB52bPiLZtU+62krb9LkSzUtfPu65tRvaYVRE0IdS0nWIg8FnnGYFt Dxu4tB5WY8eCB6tZQ+CBrW4SP2/OKjMEtv1j4XZ8WK7qsRzmsKqHaGyVjBDNKg7uFmpaVpq4tpWZ iWg2q8Oy0FCHZeEz/xBNLE8GosmZPYhmcTpu1IcJqJsQTfB6T4Zpch3wYzSRyd5mNXB1b1ZFBipB M+UOPNHs2anaVtNEAck1W8NlPYLDKJ1oHBxGtI1PbOE0k/pz0CZaFhueZbc9e5hfC4knGvoUuoPi A2tFNLDZnv0JPGEhcTxQE847GxYjNTYbMriZUpvN6pwb5Kq7BGBgV0s4ZoGehJZlPNat8bHNJUIs GwmSQ+sVXKIvYd2Yji5QPxsV50hxH1YTD7ANEhXS7g2YAEyIq2ExLG/1U7RUIJBudUj/xcOET9u2 CaDT0s9pbZioDw26692Gjgf0JAEPWkvQnyGbrJrDlgtp28e0yRJMW9+BaNOGioGoW0SDlvRAhGgo Eh1tNTURDd0xLBcaP7YwWPqOnoGaiIaLa7edouP2hPWECXE1pHgXIwAZ0cXEhal0MXFh6e0chbV/ 4ICHx1ADiOwx5B/pGeXxl7/8+gMH5Mfx+Puvf/z649ff/v3R5T8ev/74b71f49O//vVf/+UTio4S qpVQvYSiEmqUULOEWiXULqG4hCqt/Sit/Sit/Sit/Sit/Sit/Sit/Sit/Sit/Sit/Sit/Syt/UzW Hu3xG8xRFFH9ifrz3wyG9rp+8BP6H//3z//9jfj1t0e/o3yAf/796dNQZ6f+i/P9os8SapRQs4Ra JdQuobiEkhKqHTVYSvErrNdgtfVvNQJajYGWUrCiStkVJU9Dd8en0dzobrj8ZLgx3VhubDfcszw9 43L2NNo39foQ31Kj7a+5sd17z73jC5T7OiXyirBEXwHWE31FWKKvCEv0FWGJviIs0VeEJfqKsERf EZbkeITVWOg1FqjGAtVYoBoLVGOBaixQjQWqsUA1FqjGAtVYGCkLvps+KxC+13zm7XLjymR2w0tI 8xLSmhvdDXJjuOGem3tu7rm55+aeu3vu7rm75+6eu3vu7rm7576/FR2f81tJG+1LSXsu6cO+7M29 0/Gy8JlgAywTbIBlgg2wTLABlgk2wDLBBlgm2ADLBPsKm5lgAywrGwFWY2HWWJg1FmbGQpeQc+Qa J9c4ucbJNUeucXKNk2cPefaQZ8/w7Bnuebjn4Z7HlwPx7yG+pchc31KErsBfTsToc9/XKdNXgGX6 CrBMX6+wlekrwDJ9BVimrwDL9BVgmb4CLNNXgGVZHmA1FlaNhVVjYddY2DUWdspCOGbjfZenTD1j pgt3esZMz5jpGTM9F6fn4vRcnJ6L0z1P97zc83LPyz2fA/1p0G8puOlbCg4fyvqS4Ndd+89//uf/ /NfzxO73Cr52/munvQrLKvg8yUhEHFCZhpfcYc7/jCgOxO7jNr4Ufa3ofK5ov6N2WNDlk9/ti/P9 0pagEmqUULOEWiXULqG4hJIS6rrl/wDLKA6wXoPV1r/VCGg1BlqNgpZyEFpWeCPtKTyvN9szc3uS b68321XKXm/Y640n9vZ+1Wb3zO75yvnr3s73Q2461rdUavwtl/Y1AvniXV5qRMuE9grrmdACLBNa gGVCC7BMaAGWCS3AMqEFWCa0AMuEFmBZtgdYjQWqsUA1FqjGAtVYoBoLVGOBaixQjQWqsUA1FkaN hZGy4BcZL0Xec9zec9zec9zec9zec9zec9zec9zec8QLpk+judHdIDeGG9ON5cZ2g91wz96KYG9F sLci2FsR3O4nonTObyVtfD0eyBVv5t65vbb3E8FGWCLYCEsEG2GJYCMsEWyEJYKNsESwATYTwUZY ItgIS8pGhNVYmDUWZo2FmbIQLinsTTH2phh7U4y9KcbeFGNvirE3xdibYuxNMe6ePd09+7me/d7O 9OWM+3uIbyky95cUYe/zMfUv3umlYzkzfQVYpq9X2Mr0FWCZvgIs01eAZfoKsExfAZbpK8AyfQVY luUBVmNh1VjYNRZ2jYVdY2GnLITWN3uLjb3FxnQJ1zPGW2zsLTb2Fht7i429xcbDc3G45+Ge/ZbO 3opgb0Xw/HKH/T3otxTc41sKeh+R55cE/9AVeF6aryP/tVFe5eAa1A8+X+/731GZhue6w1wmO6Jm IHbevw1I0deK7ueK9jtqhAX11g9P/uJ8vXSEqYQaJdQsoVYJtUsoLqGkhLq6Aj/AMooDrNdgtfVv NQJajYGWUTDDtwDs/UH2/iAvLyTLs3d5IVkuv+WFxBOVvV/F2xN1u2fPYfZ7OO/7kTgd4ltq3L/7 f8+NdQWeX7zv13ckMnkFWKavV1jP9BVgmb4CLNNXgGX6CrBMXwGW6SvAMn0FWJbjAVZjoddYoBoL VGOBaixQjQWqsUA1FqjGAtVYoJSFeEfw9h97+4+9/cfe/mNv/7G3/9jbf+ztP/b2H3v7j/1NJPY3 kfja968br3cF2LsC7F0B9q4Ae1eAvSvA8m3D3G9XDC85JN9Kzr68S+5djs9dkh9giTwjLJFnhCXy jLBEnhGWyDPCEnlGWCLPCEvkGWFJkQiwWWNh1liYNRZmykLofIn3p8T7U+L9KfH+lHh/Srw/Jd6f Eu9PifenxPtT4v0p8f6U+Ksy0r7sm7+H+JYic3xJETmuwOuL98Yv65TpK8AyfQVYpq8Ay/QVYJm+ XmEr01eAZfoKsExfAZbpK8CyLA+wGgurxsKqsbBqLKwaCztl4bqpPhPOu13i3S7xbpd4t0u82yXe 7RLvdol3u8S7XeLdLvFul3i3S/zFGyH37C/eCH05GP8e9FsK7vYtBb2lJ/QlwT/cwtnfOOB5He6v mnMVli+33HC/TkQcUImGk+/5v6MSBb+iPAUlLPd4+QK1Z6jw/am8vOGUoi8K5Ukh3VHh7Qrx3o68 vOD05vzlODdKqFlCrRJql1BcQkkJdV3of4ClFL/CUo5fYVSD1QhoNQZajYKWchDO8OIdQPEOoPjL SOIvI4m/jCT+MpL4y0jipUG8IyX+MpL4y0jiVUP8Zi7r3gZMh/iWG/dv9z8khwde/Yv39fll/u+w 
nukrwDJ9BVimrwDL9BVgmb4CLNNXgGX6CrBMXwGWJXmA1VigGgtUY4FqLFCNBaqxQDUWqMYC1Vig GgtUY2GkLIRvNMTbheLtQllXinpt8HaheLtQvF0o3i4UbxeKv5Ik/kqSXOcQv4GL9yTEexLiPQnx noR4T0K8JyHekxDvSQjfrynpVN9K1P0d/vcSta4w+4t3fnn7cGQ6DbBMpwGW6TTAMp0GWKbTAMt0 GmCZTgMs0+krbGY6DbCsWgRYjYVZY2HWWJgpC+HNU/HumHh3TLw7Jt4dE++OiXfHxLtj4t0x8Xdm 5PnOjP0plafR3Ohu3L94S4f4liL3d/jfU4SvwPn52f7GyX2dPuvrDfZZX2+wz/qKsPVZX2+wz/p6 g33W1xvss77eYJ/19Qb7rK832Ocsf4PVWFg1FlaNhV1jYddY2CkLr1/q2h8FesqU3XDhNs+Y5hnT PGOevTb7M0BPY7qx3HDPzT0399zdc3fPPT/q3gb9loL7yy3T1+Rhfz4oTfAPfQLxdxBkXBv+tcFe y5TfXO1vEN3YeHK2W0RFMu6/b5ejfRVOxMs37PZ3V14XoV+LwF+c06dv2H9AjRJqllCrhNolFJdQ UkL5hfwnWEYxffyG/SdYbf1bjYBWY6BlFITfs7O/n3UqirwckJcD8nJAXg7I5UdeDsgziLwcDC8H wz0P9zzc870LlQ/xLTVu37B/yA26As8v3kNHIpNXgGX6eoX1TF8BlukrwDJ9BVimrwDL9BVgmb4C LNNXgGU5HmA1FnqNBaqxQDUWqMYC1VigGgtUY4FqLFCNBaqxQDUWRsrCa6/P/gzeM2+9hEzP5Okl ZHoJmV5Cphen6cVpenGaXpyme57uebnn5Z6Xe17uebnn5Z6Xe17uebnn5Z7vv2WXz/mtpN1u5h9K 2vB491+ze/MefoEuE2yAZYINsEywAZYJNsAywQZYJtgAywQbYJlgw68UZoINsKxsBFiNhVljYdZY mCkLr18D2R9XfArLNb5d4/vSnGt8u8bZs4c9e9izhz172D2ze2b3zN8OxG+/PXelyO1m/iFF9hX4 24lYXk+xmb4CLNNXgGX6eoWtTF8BlukrwDJ9BVimrwDL9BVgmb4CLMvyAKuxsGosrBoLu8bCrrGw MxY4HrPFM0Y8Y8SFK54x4hkjnjHiuSiei94Bwx8AfRrNje4GuTHcmG7c+sf5oN9S8HYz/5CCcg0l T/APN3O/BF9H/mujvMrBNagffJ5kfBZx8q17nH1r/Q5z/imirmmcxLZ260bk6GtF/e8V9DvqeF3Q 5p2Zdv/lvHfnL20JKqFGCTVLqFVC7RKKSygpoa5b/g+wlOJXWK/BauvfagS0GgMtpSD0opq375q3 75q375q375q371p3+XUvJN0LiWds80ZU6+65u2dP5kb3I3E6xLfUuN/yP+SGB77/Vt6bd3o5F7VM XgGW6esV1jN9BVimrwDL9BVgmb4CLNNXgGX6CrBMXwGW5XiA1VjoNRaoxgLVWKAaC1RjgWosUI0F qrFANRaoxgLVWBgpC+HG0byZ2LyZ2OjKZC8h3kxs3kxs3kxs3kxs3kxs3kxswz0P9+yniOb35+Y9 huY9huY9huY9huY9huY9huY9huY9hjbv56d0zm8l7X7Lfy9pdMXbX7xfv3j0esu/YP8Pw/MCng0K ZW5kc3RyZWFtDQplbmRvYmoNCjIxNjIgMCBvYmoNCjw8L1R5cGUvT2JqU3RtL04gNTAwL0ZpcnN0 IDUyNjEvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCA1Nzg5Pj4NCnN0cmVhbQ0KeJyNXLuOLEtu 9AXsP/QfnMokmQ9gsZYMAetc7K4nyFgBghxB0P9bYrAm6nRnN6vTOZe3JyofZJCZjKmeWlp5HI9a mjxk4r/66AP/tUcpHUZ7FGsw+qNMg+E/F4XhD3Rxox8PKRVGeYgVGNWHw7j+c60YuOtD8WSx+bAD I3d7tA6MjUc/AuNGw8h9PsaBkcfxGIqRR3mMgZFHfcyKp4Y8puGpoY85MfLwNR+CoYcv+uhY9fBV l4Jlj+GWYfQx3ZoY3pdYqmD8WdzqmGDWR5GCGRxSRGNhPooMzDEV7ojPfI524DMHu5tiPF9BV6xg +k+7L6jWw386avzU5x3ub//MZ5vhBF+4jxk4X9rhYLf8fw8ZsHwLsY8a4KKwwokV1jy3UCuWWyue KBi+4QkERDBULYKA4VkMIO44t/wHWjFKcbDGqhBvO1fgEAMf3Fke4yNm83+6Alf9nz4xBwYYgjmw wdExR/V/Jqas1X8wY6XVwXPGs/0hh/wsVw54sjo/3OFYvYBI8F8VZxLCWBEeiaGqOEcr+AH6iWCR 1ScXAUOqNLdAkeoU8I8whztRFCSpvhkPJeZQn8PAkqpga3jXB5BWYh/+0waWVPXZxggf+LMzvKs+ xwz/+SL1CP/pcCv851TREv6zw63wn2eDgtduVc8EuNg96xZyqvpEnh2Y1+yhgqzCD1SQVtV8DkUY q2eIKhKr+mZUI+ZOGjWkVm0+W5uxFh+vS1DA5+jhP3eYjvCf57SO8J8TxAfBHJ7VOsN/npzuoHgW GRr+8/S0I/znqWFH+M9T20r4zxdpJfzngbegVPWkthr+8x/48jCHZ48Jsqy6I9z1MYePokFbz3Wz 4IY70ZpE5vkTLbLHC4CHCCvwpdlAGlQvATYiZzwUNiIeXgRsgpjV09SmYaWe+zaDpx7GdsQu3SWe tJjXHdZKcNcX2Up43LO7nSnkQWkIqFv+BIZyqz+ahCfdEU3Ck57TTeFJ8cA31Ui64hY8KZ4zzeBJ kLUZPCme583gSSSnl40Y2Z9o8KRnxKONYI4HuY0eOJ9jCuZwh7XZMYcvsh9YKQpvPwxzeNL1A4z1 HHr0AsaKB6WjdLvVHj0KijgJO+jjlhdcuM6t+eiCJBZfbpeGOdydXQ/M4UTvqpjDIV7gMIfneTeU B7ikm2EOz/Nu8Lh4nvcG70pFUUdlECdr71GMPMi9ozKIk7X3Gb5y3ED0QRUPNJ7wcI9yPmFuxcnj eT6iHIq7biAl3RpugS/irhtYmpcIPzqkRenzs0NRvlAohqKGiWf3UPAAhXFYwY7cJcMsTjefwybm 8DwfHmpYPkfrMYc/0aOUOkHG6LEWH2+ilIpva0xw3KvMYx7guHhCzANsEs/zeeAkEc/z6UcOrO5W i2cHDjfM4VSZSGcva4db4DhO1olQ+GdunfHwwM+IB7Y/TeIJt1pExgedDZVGnAKzzfipz9ElRvE5 Ok5g8X/mQAVGcs4RPnV6zxHccML5ARvP+ol4HOcSKszYe8enBYkhHjU3gz1O3XLg9HOzwwSrxZfq ZoQKB+hxxmrUOLnneTqVQ2P1A4NpECuOdg1mxdluQS0c7ocFU3C6exWGm3C8Hy3YhfPdzxgEAwf8 0YNfOOGPKCqCI/44d46T/Rg4fATHuAcpZsNjM/aGI72csXPi+K0igodDveBW4gcLPo3TEVnjpoeu 
gnhuOqyiTjibcbKrVwE34T49cNXSOIYPTKGGs/nAxUURMT3i5oICoAdmM1QAFM/itzfMVjBbQw3A ceEmioAfQLjCoQpowWwdZUALZhuoA5jSTRQCP6JgIsZwvd+OUAq0xEUJ0cRxWjyYmK3iqnQE+3B7 8Vyc512i1LOCeOKUIIh/6rPF6G42mCCjVgB0xhUEIwRtcU10E3VYBbM1HH0qmK3h7FPBFA2Hnx+V uK7h9FN/1k2UBhXMFmeECmYLIvthChOUU8Fss8ZseGy2mG3ixldjZbjy4fJbUcqL4D7gJj7F0VtR PNxE+VbPTg+mYQR3hpszaG+4NIJG6hXDDxgcx35kwwTl1IBtsUjDYC0CYJitIc3VMFsPGhmm6DGx Ydw+47GG26jEY3ExDZ8ZZpvhM8NsM3zWMNsMn3nRKM7G2EWHGT5DHms9F9lg9tixz6Zx1ily/ryQ qJeEohqU82fdRElTNAa+1xjMJ9Yzmugb9IxmxxQtiIhS4T/CLlAqtAcR4TP/f8yGouAXF4yAqqFx c1bUEr+6YG8oIH53wRqQhZ4hWC/SVGcQEbXEDhQQRS3x+wtmQy3xCwxmQy2xOPnVq52bKCCKsmI4 UvzTA2YPrM9mGt5BLTEN76CWmOH6raglZorZUEv8+MAIZ5cQjEItsRaMQgGxjnu+IkW8cGFvyCHr UUBQS2wg8riGu4kCYqglNlBADLXEJgqIoZb4ERVXdkUnggJiqCXtQAGxI/oTFBC0W14pUEAMtaSh PatYfzlvOxYdTI0CgkLacGHyT31ir6PhHYxgyBZDLWlxvBhqiW8Na0AtaQ3+NdSS1uFfXA/chH9x wnpdQoHGhchNFGhDLfFqhTWggPg1BztGFvpYWAPStB9HzCYwkVmGstLRAfqn/liPRkTRs/YaLkEt 6RJToJb0aEaRih5sENFQdzpI6p9iBDsn7jCRFzhgvGLWaITQyzVc8dE3uBmxQC3pHYRBp+Qm7gaG StDPbaJU9HObqCX93CZqST+3iQLSz20i38a5TWThOLeJNPXDCYtELRlxxhpqyUCj7k1D9JZnt6Yw cf80lJWh59IFZgtXY1yLHg/FZsS5aSg2IyqtodiMOEVwCXITKY17nTfRNR7DxB0pbShB3o/jMRQQ PysBQKkYEyltKBUjGltDWEbcHwxFYR6xspATjlgZasksSGlUSDeR0oYsnFAC3ES7HK0ojj43g3Ko JbMG5VBLpgTlUEumxN5Q8KZGLFBWZvR6hoo4o9kz1JIZGoShlsxQIQy1ZHaktKGWzI6UNtSSOZDS hloyB1LaUEu8umI2FJA5kdKoAW4ipQ0+9BsT9oacOo5IabDxOIJGIMVRIqVxfhwl+Iu+6YjW3JDB R42Uxol51EhpbPuQSGmcCYdESuNcOqJAGwY/NKLZ41NM33AdO/q5eTzW0ao1EMhvQQHAbAObbyFD DGy+hQ4xsfk4uLyOGkwIJ8cRI0CUCKY2xLGcTMWicdl4nGd5iV3EJa1EZ9hwiS7oAWoEpOBK6iaE G1yA3ITsgdOoRsUpKOluzpBA8FhIQdFYxVWohDdaDQ2qxGOhQsU2a4g/sc0ailBss4ZMFNusocjE NmuIPbHN6LHPbYZwcSDGLbrsAzFu0dqWWI6EtoIYt2jmofnU9qOQYDYJ2QS+Pzs/tGVuhvgR3pFT rYj1htCAIRsy4nRXC9Gh4drRjujM4fsWwkyP9Z597hGPna0xVhb6z0A6xYXDW06sARck73XjsWhE IywaXWeEJZqWkGna2WNGWCwazwhL3LOjFMeaJO6/oaCcN6Zm0QRGWKITiTSNQ0PiVtHOjsuj9+c/ //rjFAiPx99+/f3XH7/++u8Plf94/Prjv70nxad/+cuf/uUzTPdgtgdre7C+Bxt7sLkFs2MPVvZg dQ+2FwXbi4KlURg/sH/8K3CRSf4BjEJDaCgNo9FodBqDxvwxOgfsHNAL54/BkXuM/G2Jf/+/f/7v b8Svvz6sPcHKwZ387dyJXSuwm9F7f/FTxq8FlvFrgWX8eoW1jF8LLOPXAsv4tcAyfi2wjF8LLOPX AsuyfIHtRaHtRaHtRaHvRaHvRaGnUWhLwnVmTCdfBzNmMGMGM2YwYwZzcTAXB3NxMBcHRx4ceXLk yZEnR55ylyRc9FsKdrlLwc6lzDzBSyl86p//+T//dT5YuL7C7QkXape/2saYZzA+k3hFZRyeL6WU 8bcVtVRSCHG/15eiL4/aj0frM6ovDp0/kcTvs9LB0Z48LVi2ULqFsi1U20L1LdTYQs0tVDn2YEmI V1jdg+35v+wFoOxFoKQhqCtLlYwyGo1GpzFokH7loFFoVBpCgyMzh9Fd/Rj9jr1c4ltqlH6TGz8e eMRvUPPR6/Hip4xeCyzj1yusZvxaYBm/FljGrwWW8WuBZfxaYBm/FljGrwWW5fgC24tC3YuC7EVB 9qIge1GQvSjIXhRkLwqyFwXZi4LsRUGyKBQekywtlbWhsjZUpmhlbaisDZW1obLqVFadyqojrDrC kXkrwJsKPwZHFo4sHFk4snBk4cjKkZUja70pItdW30qUHnclql7TyM3oai8Hc8bTBZbxdIFlPF1g GU8XWMbTBZbxdIFlPF1gGU8XWMbTV5hl1WKB7UXB9qJge1GwNAq6pJqS40qOKzmu5LiRfEaOG7PH mD3G7DFmj3Fk48i8gNezwf62xLcUMbtLEeXE7eb6jJdZnv2U8WuBZfxaYBm/FljGr1dYy/i1wDJ+ LbCMXwss49cCy/i1wLIsX2B7UWh7UWh7UWh7Ueh7UehpFK4e9SfhKIpVimK1XcRlxlAUqxTFKkWx SlGsUhSrFMUqRbHaOXLnyJ0js6+u/e6qey36LQX7XZP545NHvCiWjf6xa59Xn3A1A9cBe7kpb1zX fvwzibPe/h71mcK/Uf9Wn3v7fjdW/QiKl9+eUJKhFv2njucQpugrhP0nhPqMsiWClIDqmDeDz5dt 2RaqbaH6FmpsoeYW6mrtv8A+R/gNlsV4gWVBXmC6B9uLQNkLQdmLQUmDcGl5PzylUlgnC9RkTk8W qMkCNUnAqyRQucJbmz9GoVFpCA2l8Szdp0t8S44y77JjXhO3fHQ5xrOfasKvFZbwa4Ul/FphCb9W WMKvFZbwa4Ul/FphCb9WWMKvFZZk+QKTvSjIXhRkLwqyFwXZi4LsRUH2oiB7UZC9KMheFHQvCroX BU2jcN1LzgokVB+F6qNQfRSqj0L1Uag+CtVHofoopdMYNDgyG3ShdiHULoTahVC7EGoXQu1CqF0I tQuhdiHULkSeL0Dpnt/Pe7kpaUKFVqTejC7syV57/C+wjLALLCPsAssIu8Aywi6wjLCvMMsIu8Ay wi6wjLALLCsbC2wvCrYXBduLgqVR4G8RmHNU0YQqmlBFE7k4R45TRROqaKLMHmX2sBEQNvpC7UKo XYjeXIl/L/EtRWzcpYhcE9/cicVe78QZv15hLePXAsv4tcAyfi2wjF8LLOPXAsv4tcAyfi2wjF+2 
3CqFEpRQghJKUEIJSihBCSUooQQlfMdDXtSidNY3yrRbylAok5bX7E+NcB3XPfi67F5Zc+3zbsXt JV3Jm3mLqlso2ULpFsq2UG0L1TPUWpGoswh1FqHOItRZhDqLUGeR/hzBdI6LH/OHH+MZdf3q9Ice lH6k13TwT/T4me8L/qX5vQVdve+6r9dXUkpGo/WVFHlRlVL0m7NKfYatyUTNSl5UpXX08bnB/gLL 6Do+N9hfYBlhx+cG+wts7ME+R3qF1SzUCywL9QLLSsYC24tC3YtC3YtCTaOwHiF8OUr4cpTw5Sjh y1HCgixUxoQvRwlfjpKrVl/6ACUPoeQhlDxkPgsT6RLfUqT2uxQZ18TtZvT5ucH+Asv4NT832F9g Gb/m5wb7Cyzj1/zcYH+BZfyanxvsL7Asy+fnBvsLbC8KuhcF3YuC7kVB96Kge1HQvSjoXhR0Lwq6 FwXdi4KlUViFCWqb+D7qj1FoVBpCQ2kYjUaj0xg0ODIlD6XkoZQ8lJKHUvJQSh5KyUMpeSglD6Xk oZQ8tN5dka49v5U0K3cljfqv1vyOFF8NfXZ8QtgVlhB2hSWEXWEJYVdYQtgVlhB2hSWEXWEJYRdY Swi7wpKyscL2otD2otD2otDSKCxtgFJ8U4pvSvFN68U5cpzim/LFIRVmD/sxpT6glDyUkodS8lC5 uRj/XuJbirR2kyJar4lvLsaqnxvsL7CMX7pcopRCjWremzw99r7FebdF6kGqmo7+sVPq173tupxd UWbg9Lhb8WsLdKLOe1aKqlso2ULpFsq2UG0L1bdQYws1t1DsP7/B9rxf9txf9vxf0gAsTa/yhSnl C1PKF6aUOpBSrVKqVUq1SqlWKdUqpVqlVKuUapXyS1b6LCzlS2S2/XRI3mnbM+wt2zhxq+non7Lt Z8Iv+Jce+Qsq42J7jVFGxrbGqI0bd7XcXfMZtrqrXe6aN6P3j334N1hG+QWWUX6BZZRfYBnlF1hW dBZYVnUWWBbqBZaFeoFlhecVJntRkL0oyF4UJI3CeqbylS7lK13KV7qUr3QpDzilPKb8Ppjy+2DK s0+pDyglD6XkoZQ8dDwJE/kS31JE9C5F+jVxXiHiD1k8+ynj1/jYh3+DZfwaH/vwb7CMX+NjH/4N lvFrfOzDv8Eyfo2Pffg32F4UdC8KuhcF3YuC7kXB9qJge1GwvSjYXhRsLwq2FwVLo7AoDkrRUila KkVLpWipFC2VoqXyPS3le1p63aavPp1aBv4ezo9RaFQaQkNpGI1Go9MYNDhyebqw51t9K1HW70oU 9VwrN1cqK6/fZkx4usISni6wlvB0hSU8XWEJT1dYwtMVlvB0hSU8XWEJT1dYUi1WWBqF5buKRoHL KHAZBS6jwGXl4gHpRoHL+E6P8Z0eYw9p9akvzdfxRsl2d7G0co2en8kf7+HtuhRcJ/+1vfxKH39i 6cmv5JncouoWSjLUUpiMkotRcrHn953y5y/X8ovl+oxqi2ep8JjUm8Hl0y+rv6DaFqpvocYWam6h rh77CywL+gLLor7AsrAvMN2D7UWg7IWg7MWg7AWh7EWhplFY5EmjiGgUEY0ioslFXuYK35sy6mRG Oc743pTxvSnje1PG96aM/b5RwjBKGPb8kkm+6Lfkq+Uu+6iUmuWp/amuWb1cIBsPvrbG96jPRF1R GU/tVf7LeGqLHGE2brxrb3LE5d32DFtPDcpG9vz1urfR2+vOsmRYYFkyLLAsGV5hkiXDAstK0gLL StICy0rSAstCvcCyUC+wrCQtsL0oyF4UZC8KmkZhkSOMX+syfnfN+AKT8Z0q483DKIoZ36kyvlNl vJQYVQGj0GEUOoxChz3/7aV8ie+nf7lLkXZN3G5G75+77y+wjF/9c/f9BZbxq3/uvr/AMn71z933 F1jGr/65+/4Cy7K8f+6+v8D2omB7UbC9KNheFGwvCrYXBduLQtuLQtuLQkujsF7nqUEaNUijBmnU II0apFGDNL52ZXztyq7+5WrTqWAYFQyjgmFUMIwKhlHBMCoYRgXDqGAYFQybd+dmTxuOJnclh6os /p5nOnq7/nTNa9P9BZbQc4Ul9FxhCT1XWELPdlz3tpMCjWJPo9jTKPY0ij3tFHu+jfju8nnj8ka5 qR35NevjNdOuQ+w6qTqHumlG2/LLqx+U3aLqFkq2ULqFsi1U20L1LdTYQs0MtdxDGl9catRHGnWd Rl2nUddp1HUadZ1GXafV50xMZ71Ix+++l+OZm9ffrPkhHV+vas9vQ72Nfr3q8dp0f4FlXFlgGVkW WMaWBZbRZYFlfFlgGWEWWMaYBZZR5hV2Nd1fYHtRqHtRqHtRqHtRqHtRqGkUFiW08Z2hxtegGl+D amyyG0WyRk2uUZNrfA2qCXORr0E1NveNCkajgtGoYDQqGO35jaV80W8p+PwW9HsKUpRtmif4p7rf +I5ke37XMH3wtSO+R30m6oKSzzxdUZ9puqI+s3RFfSbpivrM0RX1maIr6jNDV9TnMrGitnwvW77X Ld/rlu91y/e65Xvd8r2++f7/AbjdDyANCmVuZHN0cmVhbQ0KZW5kb2JqDQoyNjYwIDAgb2JqDQo8 PC9UeXBlL09ialN0bS9OIDQ5Ny9GaXJzdCA1MTcxL0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGgg MTI5MDA+Pg0Kc3RyZWFtDQp4nOydz440O3Ld9wb8DvUGN0/wTyYBQSvvBAgDzewELWThwhtbEsYj wHp7R7Dix+7OrqxOG7YgzcxCas53o8hMMoIZjHMiaL2Nx/aw3vUoR/y1x97jb3lINRr1oWbRaA+N Ker/3absHv8v/u94lG0Kj0epIbxv3l0I73pUhfBuj9pCeC+POkJ4r49WpnB7tH0K90ffpvD+6HUK H49+TOHx2C2Evde9h/Chxz5C+LDHUUL4KI9jD2H/xdAUPh6jjmjsj3HE4x7joa2UaHVv+eNZH+bv atHDiLfu0YU/oWyLPoa/t9UYcfgvbI8hx/5Q0fzF4a02f+E9F+/A9m17qMab7Zu8FWPs3pWatmj5 GC0eat9iav0RveVjdIvnG/6LOcP75qMdFg/vs66jTzkfY1iM4V1p9BhD3v22xRjxyls86S4X2fYY I9ZQsQC7mrfaHi3v3n8cLf+fVlq0XMRi8nb5z0p0sJt3VWLtd/Puy4gxYg1qiTHiMWqPMcx/1mLd dvOuWo0xYjrbEWPM14r/sD+VJsaw8ZwI22dXdc5VdHDEGMV1cdOcg91bLX5b6qNozp8/btGcP1/0 ojl/xeUsBtpdictUyr24LvqPo3tXxlJj3Oo9l6M8h3RNjXFr8VaPcV2ktG0+kI/RSozhWlharPnu r1W65hj+ix66vFcf7dB8Fu/vaCHXfIwx58+nrow5f81/Meb8uarUbb60v1bd5vx5V1Xz9X15qub8 eVduNzGGG6g/aLyHq321OX/NW2XOnz94fa6HT0md69EP77nFouxus3Va1O5TUqdJ7W61ddrU7ipV 
54Lu/rM6rWp3O6zTrHYfqE672sOGx5zdHrbb57PYo202fxGtqZ1u2a7i8V/dfpvmqrpJN81V9Uls zzUKe3+ukf8H3wLiv7oRtzKfwJfHpz7mxR+y1TLVwn9R9/itT0lrUzfcnFubuuEq0FpsKbtbd+th 3bvbTIs9x1s+xjSS3e28xdR5y8cIYW/5GIfNMfwXR2xpu1t8f66Rm1B/rpFPSdecU3+0Lp8IO1xV +twjDje/bq5c3jJvxc53+PL0ErpxuJ33Erpx+OT04puHt3yrqzG7xxZ7nU+Jt3yMFgZ7uJ331mIM V3nfomMMfwXflWMMt/MeO6y3fIzYFLzlY4Q6esvHiMnxlo9xhCEerra+38UYbue+n8QY/mi+y8UY voy+j8252h++U8UY/v/chmN2faPY5+5++Ou79cVoro5uXzGad+8WFKP50rqNxGg+nW4FMZrbuet5 /Nan0zU5RvNXcD2KNyrxrYnd8fAl22NL85aPEarsLR8jltZCCffo3ls+xtHnGP6LEbp2uIIcU/8O V2Wfr5DzpfC3jzFcCf39Ygx/1WPu44f/P+8uxnCL9xHnUw1vzTnwTfB4vpFP59HmbPgDHW2+uavP 0eYKup0fPfZOl/HWHr20+AKFrh3+0sfzPVxFj+d6uJ0fMXXe2r01dcjV4hhTh7z7Y0wdcjM4xtQh 3wrGNnXI7XxojuFGPEzz3Zq3QmMPf9VR5ox7p6NMzfH/MOaeHRM25h4RH5HxXCNf2vFcI1+AMT+/ h5vL6FObfKDRpzb5bIw+n95VdMxt/fAOxj61aY/P6tQNt+5xzJn0iR1HzOThrzDGnD+38zHCuuM/ jBHWfbida9vCvA83dG/ucxT/rm0KAz/c6LU916nPr7am7BHNORPzW17nVMQncqstxoov6FbDUzni w761+Ngc8WXf2lST+LRvfepJfNu3uS0e8XHfYuv05h6eQRj7EZ/3bZ8rFd/3LezbRnzgt1Ayb8Zo 8aYWq6htxGqN+MZvI0w+bM2boTGhrv4JC6Mf8XH3jls0fTRNX2v4BuCtWL0RX3opDH/Ep142Lf8I WYvFGq5lLjp3i/BkNLeL4f/bJy92/KE2fZkYWCEQDo83o4f5nR6K0WKbt3hodxLigzIsRguzsVg6 b8bmPSxcpPhvFu6HN8MYhsVoY8zRSvhLYTbDwmGauhpOmOyprOFH2tTW4Qvi7zOHcK305pw+f0Hf YGONh28g3owv7CjRQ3sOvEdzTmqJIXp8ZEcJp6zHyo8SAuHheTN6mK6H20A0Q11HjdGer1ljtOdr 1hji+Zo13uL5mjWe9/maNUZ7vqarisrzNWv4gFsY06jhBEpziPACFVve8K3Cm2M+pHdWLDY9Nzpv 1uejl2i2OdXRbwudDIfBm3NZWgj0qXItRuuxfY0WsuEQW2iYynM1fZvx5vxZjyGmWxIeop5+yQiP vhxTEWMtypiKGF69f5GiB98mVLepiL5M3pwa5SbjzamIYYX+5DFRYaZVUxH38HEttg3fHqIZ+8bw 5fdmbBzDt5Bwh+do0zOea+FDyt89ZFv026Zq7DFaD59k7PGvcS6x2AvdKwrPbBzTow7XLL5Y3gzf bMReUuMr580Y7Zi6ExtIDRv3ZrxbLIPFMcCbUz1jL2lbqOeIvaRtU41iL2lbbCDhsrmNxQYyxnTg YwMZsZe4SsZoY56OYgMZsZf4+8RosZe4XxOjxV7ijk2MFntJq3M1Yy9pNbzRLfaSVqcF7CHbfN1K +PPuq8+f7dFZ7Dv+r/PA0MIpjr2kxablzRAIk/Bm9BAT7k1/yL65nnnziOYxZf0t3I+ZClOj6QOX mAH/WrboIey4u2pGMwTK0xmP80mJ01345b6rTMc89pJe6/xZDOHuYjT93Xp8Nkv4h95s8bPYS3p8 OF1gHnHCed9iW+nxmv6vIft8zdhA+vM1wy768zXD/PvzNWN/6M/XjL1kf75m7CV+kooeYi9xvyie ITYQPw/Fa4a9+eEnHicM0s83cW6NvcQ3+xgt9hI/pcRosZe4Ux+jxV7iZ40YLfYSP07EaLGX+Ilh Pm905uoX/xoDxyHa/zVGixOgz3r8LNx8F4jHOebzxmazhxvhzeghvmfejMeJvc6b8Thhod70xzm2 58BHNKfuxGZzxDnPzzlxRvTtPJqK5lzN2EsOzdWMveSwuZqx8u64jefZSu6bzZ/FebPM1YzN5ihz NWOH8W1y/ixGq3M1w47dKYqfxWbjfk/8LDYbX7b4WWw27rPEz2Kz8d1//qzNg+z8WYy2z/mNzeZ4 KkFsNscx5yw2myPcMn+6GC18jRLndR3xafNmjDbmcsdmc4w5fbHZjFCbEl9Xb85+w4b8OxpKEEbm X6vZ2RHNqUb7PFjP543NZsQm5k1Fc+pDbDYjTNqbJZpzjWOz8a9K9BubzYjTSglL03SzvBmjxVfa mzFam88bm43vevGz8TzHx89isxlx9i+x3WpEfMCbMdo8pG6x2Yw4KXkzRjvmssRmM8Jn9GaMFj6L N2O0MY0sNpsxrSVmIGIFFs0ZNnC18WbEDeL5vBmBg4iFeLNGs8+f+X6xxUnHmz2aoX3T097irFPm 4XWLw443RzTbkcf1LY473ozR4ryTrVibbZ6vp0szz8OhEprnqvnRnyeEMNmnIxa73Pxa1Hj3ac9z +uPgvYXXqhkO0AzIxOj73Ohdzkosf8zg0w0MFS91fpUiXDX1JKywRkAhvC/3n2MLCntt88sRb9PD lVJMs5+GYv/xXvb5VBHr2uc3ynsq0wjjxOuf3XiWsCL/8Mw18FasWZxH/Ov6nCk/5tuc1BYH/jjF +rxWmw6AS/iHMxwEX7JaxoxCtTjmz1DKHsf8+EjH8b2Fc2hxVJ+br0WYbdc8H/sv9udp13/hp595 qvODfGxnfg6MI/30p/1QvJXpF/txe4tDmcWBehvzOx+H+3DM/+IvfvnNXMbt8Te//PaX3/zyV3/r Y/7d45ff/Dc/GsW//uVf/uf/9FJqvyV13JIad6TadktKt6TsllS5JVVvSbVbUrfmvt2a+3Zr7tut ue+35r7fmvt+a+77rbnvt+a+35r7fmvu+62577fmvt+aeyZ1/yI0w+CfpOxKqqfU7/7LFIswuv/D T9K//ee//8cPiV/+6lE+SzX6/Jvsc0o9I/LXnY+vRn9Lqt2S6rek9ltSxy2pcUtK2z2xqyU+iV2t 8Ums3BO7twC6twK6twS6XIPtpKbuNT1Vyr9V2ag0Go1OA/0bB43xbMRBMxuiYTQKjUqjvVNfHvGb beh4ZxxjDdyvew/E5PM8XejXScwu9OssdqFfZ7EL/TqLXejXWexCv85iF/p1FrvQr7PYhX6dxS6M /Cx2bxXKvVUo91ah3FuFcm8Vyr1VKPdWodxbhXJvFcq9VSj3VqFergLfydxaAmN5GqTYG4SJir1B 7A1ib1Cj0WnsNA4a9Gz0bPRs9Gz0bPRs9Gz0bPRs9Gz0XLZ3mwiv+m2LqnqzReVMPiZae917KV/m 
[base64-encoded PDF attachment omitted]
WsF22s4H6TM6h5wy2kvHaB+H0Ad0npvyQe5NuZRPD3NTOk7BNI6KaT1l0gYqoemoUUZZiIVRRyqg LeBMqqLlNBrzjKZ0yqOjoh99inU9i9530gqKR42FqHGcFmBFXqWttBujaUa30ErklSD3EK2iCdSX eqPXR+giPyIc/DDKBIOK0b7saTRaukZlqOem7R6SrXlpgoeu8kiM4i5azjPVqJVYeAcnop8QjHUG Wsqlh8HjyQV97UnP0hmO5Rjqh9kU0md8AfO8n8oxltGYWTHqyTEVgENopfEN5n+Cazga7azFyPMg eV+aLjKoCTWlK5CknU6jrWDMQXImpOemAkWjFW1nB/p0cIIgLuft3JePQHpj0WcVJHOULgqHUUN3 o/VH0F88Vq8Jz+ExnOfROLkuC9CmLF2MeUpeaJwT+9DnCsXr8V6D3ksUl6BlL3eE3CQXQGqZqCdZ trMcKyJ5NKQoGaNQXIwZjoe8XuYIWkPv0Z3GOQ5BvAkJXuBlifQ8ZPUYrRCRcoOISBEp0c3ehxcg V5Z2b5Tfif/+I6Z6I6BAD7+E9Y7BLtQwkiSqxCwF5reBAzHuRlgVJGO9diBP8DSeRi9BN6SMvJLz SsktqQV1PB26O536Q8476vGrqLEVmrUbsvLKs8QjT69M3fKcXydLL0dD3+WaHlf9h0Dj0qkQu1Km exn50C8HLcHoG6OcP0UIM/RjB5vJaVzFfJKMH2AmjtC3aqfmo8ejapdmQRpyjz6EcUyG3uzDGPLQ QyQ5kJtHuVi1pbyDxrFOA3ksLaUtIhCakkQZNJRTMfa3Me5xWMNUms2xiK0Ez1aaXAyqUnpcRjbI P5jmUhx6kSOQ1mIoZRpX6HaKBc1FiTCMyD2KYowiTo0ji9rj5NLV2o2DdjfHeFdAdndCr8YjDMVb AmgedaMo1F8JlpbkGYx/LuY5jAaSFZSG1p+hRdSW7kGtB1Fb2pNXYRG2UjfjK6zYPNSYjp7XYId3 oQIRzUN5CA8RbXkbaA2vQSxNtBU9odVrhENbSlV8ALq9npvRk7SR5/IQrG4BF2GttlI1rMZi7L9W NALxb+kX+h96gl6nF+kAbcQqL0bubvoR6/s5yj+i9LMaeVWK31PkbTkflvZau4tVm7LFuvZ4LlZk K1JeFMm8jLO5Lb/Jb9IVgU3FJ3k1+CQ/CX6bT/BHPBmW7Xsu5gzuxWb25Xb0KEp/Joby+/wdB3A7 DsbKXtt/bwtNsND4CX6Ky3gGy/N/HedyNnQvWhXxJx9VMgjjkM8KSF7uLfn4geTzAizl17Qa/DVK rcdeAGEk0k6701fzPXwUI3+O30b5SKyDvS70xv8DD8a+Tp1wRKHY5X70DiS0GppfzTv5JzVOZSwQ 98yP3+K/1c3Vm+aZ66/C9TxSspKBZB+3bOrChk9jj3w8IYdjfeuFXtlCe4+pcCv2u8w3020qrOAK lV4LrZbv32Gs8sF81FxeoDnqfSr26CL6B62DJQGLllht6AXl0A2QyAnoRgA04ElI4kb4Byasw9ug o1iNe5Are1lH6/gLvsyXsb+n88v8PX/KMSIPUnNh3yRRDJ9Gyqf8Fe9Bi29CCuvR13H4De/SQb4Z PtuDdJB2Km/uEbofGhhMX0Hbd4LepMdhP+7lG0G7QDv5cT51Tdp1UpCaIuUcqfSBeBAok76jj/kn rNe7SJL2FHYTY3gMu3Yfv8PVsIOvQ3Or2I6dEcY3cYq2gN5S9Tfwq/w071V73K4oVpFRR/sggfrv 12gASoPrzs8/y/XPjt/ic7BK8szwng5/lhueHPU5T/kdbpZjkH38Th3uxKF0GQxbCPscCjs6T/F0 UC7qS06HZreHbZXn3QCMGW1BH5bxBB7Mu0GDFc1Vu0hqolcbG+yiPxv+7m77g134m7wavLbeDv09 brhz/2AH/2rH/lEod7SXTZ57gNdqenb5r0KvNf2DsM46/E7otRZ/FNbJE1YFXud3Ko4Q/Fbduv4e B2KXeqypZ/3dlkiG490kTxzcJjJxqlTzRuGHUy6U/ESEaMXTkVLEB3gWaBN1kVZBRHB1w1XwSh2W vEJJT8NJv462ee1cfUZ7CfDlFosQEYExPEg/c4DyRVYrX6UZ/KAQ6NtIeB86WHrRzZEbr1iWKIN/ LFNK6GXs1NvRbQnuI82wmz5V3t0OWMFmSJWenQO7qznqbVGe3T74TqtgWaW/7MAu64dS0lP+h6IT 8Eb2QedWUTzuNOcpHzcKM8gP4zFjv/qC/NAXdi53qvMDvT6n7NlrA/5By6Ar7royzw8jkN5mQ9vj tjHbr/NAJXvtgNe7LwO5fdp76bwasbcVueNjr7M/0rYU4A7XQXlgNyMm73PD1QlfQPeBFoDK6CmU HYPzaCq9Cl9Sesg7cKsMhuSaeaSXgBLDccqspCJFZZDQSeCDoEO4Z0l6H6OT98FKrIe8Eybh7SJu ZstoEzRsK7gMvd6JXuUMquhWeHYlKsfPQ7l1sedxmwwBzeB47gCKp89xGjJ8I9zauEY0EU1w33Kq W+B8mi964kTZCXTgnNopzwJVYo0iB2/EzasbD+Ms7sFOvDtw+wPiDiTvbonYO33ZgdpHESaAZB/R WkvVlruF89dak3OVdeDPb+cjqk+rbE3VjJWfi7jvhZDbs/DhmuDtBW7NewWhv50YZyxaN8t60Kqj aNF9vt3M2zwbqB3eOnM6x3AL7sMaVuJ9SKEvToAe7llCgwfBmyXwSuqKs1qu9TKswwaQEzeCZTiV 5cq5dWU2ZF2Fm8hedWe/G1qzU8W2ol4Z/QzdicV7Avb5o/DL+yj7GSxvXLCA7XGuyPAO7MhI3Chk T+FYXcmt4d87aRLqhWKmsnYx2twKKTtEgAggBsWi3XE0Re3caOqOHbpCnVwt4PfLG7kf9tE47G95 g1sOu9sYJE8xE2yV5HN1550N94npHpIlwiiKE+p2kdx9cg/g5FM1ZD97IQfZv2TvjrgbHlccdoWX ZUsCbc3CzgjCjOSuHgk76Kf2a6iSE8YFXyWNT+IGUgnf5EPuBzwLflobQp9QKI/lhVhHpNAZeFtP 470Mb2vwTvw1bimdQHKN/8l3eqyF14a57ViZvOn/in/LE9kAu3ntVns9Sw9FWhBpfbxc/zMDyWHQ Ci97P0Oo/1lCfd6ibGV8nSWq/zlDQ/Z+7tDw84f6HASdkey9I0uPRbK0Ut7PKSSPQf3eSFuBueY2 oHqPEWFEcD2qn4c9cD01qCcC+ByswqOK/Rp8FCj1dmU9knXWgfYZ+9TZVJ/ImAWKwB67nsj40hgL WgiKMHzl2NUYMRYu4TLV7jh1L5/9R3P8o7n8mb7rkdx18u4ejD3aE3KAXtZrW3houvL5Y2GBQ5V0 5Yej8nMD5Llz6iTwNkiGeSBZEx4NrFtsvfF423SIWFiFx6Cr3kd+phgD+5ZAn8nPBHCffQL75izs 8U5Y4r7o/yD/y0PSwg7hs7CnCbghyFJhoomnHamlfXH/iIYmyk8RJK2gbczYR4dgpeTptQhcBm2z 
cVsl/WfoHtAzNBYjCsMpJE+si6jlQt4avE1HXiRszhk6htt3MDeHNW6hbudT4Ilf4RZ0hL6BpxQC y3AD92Qb+9M/1S7X6DDVwm53hr3uAtJgy2Nhw/vCojvAMcjti7ZugH5fRs0sqoFnbsEplw473wJp MqWLTLm20poFftW9vIrvQN0bcS/cJcLh23vvtd4ngQJgt1rjxI+Er9MaeaeoAiOyQ0aJdaVCwcXS gsLzHQQKVjaoBDv3EGQwT1uKdYjgjShlU16WpDXQ2irYsrm8hk7iLviJulUchC58jHH+p24R9e/q Hr+y4f37d716r6feIPTexxvey3/lWXs98Ya3DcK5twsoT/S1OO+yoO0XaTi3hM9J8DPPQvvGUk/g QqxoYN2n5PFKF8uhS/koPx5rshBr0Btt+6rPH8tRexm0ow8H4hbchSeDNHgK6aIzzwblwjt2YP32 wbM6ivRQ6E4oZ/BwpT2DuSlu65f5NkXdOVlqFn8JDTuo/IcYaF8PrKk8F4txKjSwMmjJTY3d1NCy sQlUP1167K9id3SALQ9UZ5H0IDIQBiImbXiZoh3qEzuvbZfnME5uHucm2kN7sL7Yu5i73KuzUL4Q vkmm8rXlKSZPLXkKuG+3d/IbfBo3D4fy2kpwTpVwsftTdJ7HBbCl80AlHI0Tq0SdKrNxIhdA5iYK hyTi+QxoAeiCIgdO6zf5FZzj87CPN8GD246ez8K/uYMaUWcKFKH8E+44l2FxDsM7ex/0MX/GJ/gY fwR8WYTAu/ylod7W6Qr0pi4vlOGw0qW6G1fD0Hsr6wTdAP+qLW/ovWkOwo7uTy04nYbBd+lAqZDC 3/hH3sc/1t3kGu6B3+kbe7gcmrycNtIo2Iwslt7eNpqMvdQIlqMJwiZYifVYW194Qv7Ima5WR356 9DJ1UVL+BL6nDisSDDsSgjvBSliwGXQZ/sBR97aBt7qRTyJczVPVJ1vET8rPP+UnmOAT4J95paJl /Dl/D5kXciE9550By8ekaUqOYaYv/avpJ7OB8fgYtRhnI9z9/BT6kz8QviMwAHgVow8ABioMoibA YAqEZxiisCkFKS8xGNgM+AskGwJsQU2BYRRq/EwtqTkwXGEEtQC2Av4E6YQBW1NLYJRCC0UYP+L8 ktiGWgFtFGn8AA9XYrTCGGoNbEdRxmX4oxLbkxXYAfg9rHMbYBzZgPEKO1Jb4ztoRTSws8IuFAPs Su2Mb3EqdQB2JzuwB/AbWJ84YC+KB/ZW2Ic6Gl9jbSX2pU5AB3UB9gNegh51BSZSN6BTfd6eRN2B A6gHMFlhCvU0LsL29QIOpN7AQdQHOBj4JQ2hBOBQ6gtMA16gG8gBHKZwOPUHjqBE4wvYAYkjyQkc RUnA0cB/wXYMAI5ROJZSjc9hZQYBMxVm0WDgeBpinIfvKHEiDQXeqPAmSjM+gwbfAMymYcAcGm6c g2UbYch/JZE4mdKB+TTSOIsbiMSpNApYoHAaZRif4k48Bjhd4S001vgEd6pxwFsVzqRMYCHwDM6G LODtNAFYBDwN2zUROJtuBM5ROJduMk7BcmUD76Ac4HzKBd5Jecb/0F00GbiA8oELgf+EpZwCLKGp wLsVLqIC4yT8Eol/o5uBi2k68F7gCdycbwEuoRnA+4EfUyndClyqcBnNBD5AhcZHONFuAy6n24Er qAiIm7vxIWzsLOBDCh+m2cZx2O05wEcVrqZ5wDV0h3EMXpHEx+ku4FqF62iBcZT+TguB6xVuoGLj CGzI3cB/KHyCFgGfpHuMD+gphU/T34DPKHyWFhuHsb/vBT5P9wFfoCXG+zgH7ge+qHATlQJfAr5H m2kp0EXLgOUKK+hB4xB8meXArQpfphXGu/SKwm20ElhJq4BVwIM49x4C7qBHDPk592rjAM6wNcBd 9Bhwt8Jqetx4ByeTxNdoLXAvrQPuo78bb9PrtB74Bm0AvgncT2/RRuB+hW/TP4Dv0BPGW3RA4UF6 CvguPQ08BHyT3qNngO8rPEzPGm/QB/Qc8IjCo/Q88BiVGa/Dtkr8kF4EfqTwY9qEW8cJegl4UuE/ abOxl/6HKoCnaAvwNG0FnqGXjddgkyV+Sq8Azyo8R9uMPfCvK4HnFX5OVUY1/Yt2AL9QeIF2Ar8E 7sbJ+yrwK9oFvKTwa9pt7IKvWw38lvYAv6PXjFfpe4WXaS/wB9oH/BG4k36i14E/01vAXxReof3G DrqqsIbeBtbSO8Z2MhTWt+l+yqb7/X9p02P/sul/2fS/bPr/hU1f85dN/8um/1fZ9P+X/PSU/02b nvaXTf+3Nv22v2z6X376v7Xp2/+rbDqpz5Qkt/J8e3qn+1vT/Brp0C6CJbWo7wa2gTVtD5vWH9Zn GOzKGNiKCbAIk7DP7kLJTfKbzygbDRtrh6VLgmUaAZszzlMqx1vK+PTfUp6R96vvcf/Owz507cvg QsivkzUoIL/gbSKcWuTfmJoEBgWHNA1t1rxFWMvwiFaRGK0V84uOaRfbvoM9jjp26kxdu1GPnr16 U0JfwkydsMkpqQMHDR4yNO2GYcNHpI8cNTpjzNhxmVnjJ0yUk/pPPtr/WbX/+lVyDhiT4Uzs38/R N6FP7149unfr2qVzp47xcfYO7WPbxUS3tbWxWqJaR7aKCG8Z1qJ5s9CmIcFBgU0CGvv7NTL7+ph0 TTDFpdoGZltcMdkuPcY2eHC8fLflICGnXkK2y4KkgdeXcVmyVTHL9SWdKDmlQUmnu6SzriQHWRzk iI+zpNosroMpNksljx+ZifgDKbYsi+uiig9T8RUqHoC41YoKltSwghSLi7Mtqa6BcwpKU7NT0Fy5 v1+yLTnfLz6Oyv38EfVHzNXCVljOLfqziogWqQnlgswBGJQr3JaS6mppS5EjcGnRqTmTXekjM1NT IqzWrPg4Fyfn2XJdZBvgCrSrIpSsunH5JLt8VTeWaXI2tNRSHldduqwyiHKz7Y0n2ybnTMx0aTlZ so9gO/pNcbWYfzbs2isaD0nOvK9+boRWmho2zSJfS0vvs7g2jsysn2uVmJWFNlBXRA/MLh2IrpdB iGmjLehNLM7KdPFidGmRM5Gzcs8v35YqU7Jvtrga2QbYCkpvzsbShJe6aNQd1orwcGcVzvbwVEtp RqbN6kqMsGXlpLQqD6XSUXdsaem0tLw+Jz6uPCjYLdjyJoGeSOOA+pH8ujwVU8VlLG1UnWRZjsg2 BArhsuRZMJJMG+bUW0J+byrN641ieLIYtVyTsSLTXI2Ss0uDEmS6rO8yRQfZLKWXCRpgu/jl9Sk5 nhSf6KDLJKNST+pUDfneuMtud3XoIFXENxlrijH2V+894uPmVIpptsIgCwKIj9Ih25yshE4Qv9Uq F3hppZNy8eIqGZnpfrdQbkQFOTvZs1wiW+ZUe3OajZE5Jd6cuurZNmjyVrXLm7nMMXV/gUHNm6YW JLi4+b/Jznfnp422pY0cn2lJLc32yDYt47o3d37vujxPzNU0OVOLEJ6YiNBULpRyYl1h+ZLZ2KVH 
489HKfXkSl8ztFKlsGWgKyh7sBuz/KzWP1mp0vha1lLBtWqeYboS7Ne/973u/brhNS7VMGA9RqRl jC8t9bsubyAsUGnpQJtlYGl2aU6lUZJrswTZSqu0GC2mtDA127uilcb2pRGugcuyMIkCToC2ChpQ buMlI8udvGT0+MyqINjyJRmZFYJFcvaArPK2yMusssDoqlQhU2WifLHIF0pjKHqFMKvyEVVOohKV q6sE9Z5XyaTSzN40prxK4U4LcncUozpy4sTNq9TdOU5vaR1pZndaibt0rKe0GTlBMmc7yW/Nq0z3 I61GckZmfX1QmywrHm5khlGtVVeM6easRJCggi1N2nYtkaF/gAorGnVLTOqkVVMheDP4EFinScBi T4pGUcBEsExdrvI3ajvIBa4GvweWKduRsh0p25GyHSmJWiWxtk17paJtFLreuqVl266XksK1LWSA hbZSW4oLaJR2kyec5AmXI+yAcIUnfEBbWtE3KjCpEd6ZLgENsMDc1lUMGtG1SkV6OVRkrTdl7Rak RCW11NZhVOswqnUY1TqM6hKQ0epapK9F+lqkr1Xpa4lVU9b2nqY8kXUVgc09KYgk+WlZ2ljcaaO0 TE84Thtb0TVqd1K2NgZNb1a4UcsALlc4SeEIhcUqt1jFZ6r4TBVPVPFET1xip3oYpTBQojZKGw1P JUobqQ1VYbqWCq8kShuBdxkO14aocJg2SIU3ID0MYRrKhSAcqqnvi2lD8J6CcDDeZThIG1iREtU5 qRDvk5An0J9MT8EYUjCmFAhJpiwHbwSfUimTgMXgQ2BNlWQtBZQMStKSUMOJNpzIcZKmOUGJoP5a f+T0Q9l+QKfmUHN0oJQDPTkgKwdadmB5HFgeB/lqDqBF60GdwU5wOjgbbEI7cagXh3HFoYc4LR4+ XZRmFcsoFKHFE0aJpfI7elprsbSidZQzqZHYSungbHAhuERsrTCFBCaFopws2wk8AjwJXAzeAN4M NlOiO8fpLxJFojZCjNB0aHf7LQ5HVxV26+kOW0W6w8bhXQOTbtfaQ0ztaQNYw5DbY8jtMVXvWxRY QHXa0W7wIfApsBR4OwijHYTRDhNsh/rtVCkfVe4S2ABrUKJ2aP/6MiZVOwrcqV4rMjUWKbF4i0Wd WJSNReopIKsaMj8dvBy825PXRilzG6WcbdBWG4y2EzBRxQKBUVqbCtEosBLy5YTApETIfQQYmeIB SPMByO0BaUqE3MSBCjt54ssQW0beGsvBm8E+WhWoPagdKBbUBmQFWUBYUa01VnMFaDnoQdADoGWg pVid0M323XYxqcfMHsU9lvfY0GNzj909fHeIHFC2yHb6UfPmOIBDgs3hSUFCp4kUwL8o3KTwdoVO hS2c4RMDzk4MeGtiwGMTAx6ZGJA5MWD4xICBEwM6TQyo5FxnC3vACXvACnvAWHtAT3tAD3tAN3tA e3tAUjBn8TgKoF0KByjsqrCNwkgeVxFAjXbyBLKasQO43Vbr3VHnrJU6V0TdY600I1jkfpvgDvrK xFeiOlunRsW5U2LcQVvrqzpaoDH8Ivmy3Rnnu993kq/Tt49vR99431jfdr423yjfUHOIOcjcxNzY 7Gc2m33MulmYyRxaaZx22uWtL9QnSAY+ukRdxYPkt9jUBVEePGwWNJRcTbU0kTZ6AKe5qvMoLdfi +mG0rZL9cJSbbAPYFZJGaRkDwly97GmVvsYoV297mqtR+oTMcuYHs/DmEktwVGZkVrIhkxZHSK+5 ipjjFj8Q4QmzsmSdzHKdH3ggi5rPSQxLDOkf3Gdgym9Atgft154we/0XjCTS9Wja6EzXC5FZrq4y YkRmpUFy0smuEr1Fz9SUKtFLBlmZVX4lonfqKJnuV5KSda0cWZCeUkVWGahyZJHlyNKgXGvRS5aL loG7XGtVrvV15cr7WVNTyq1Wb5l+qky/68tMvb7MVFVmqqeM5i5jrVfG9zRZVRmr7+lflWn9J8pE /2aZetLMH2D/Nw9X0VA+Vp48X95Qsm2p+eBs19I5BWGuklyLpYqS+Zjn8hKTnZtXIMOc/Eo+ZstP cSXbUizlQ+f/Ot81X2YPtaWU0/zUjMzy+c78lIqhzqGptpyUrC2Dcjpsuq67+73dlXfI+Y3GcmRj HWRfgzb9RvYmmT1I9rVJ9rVJ9jXIOUj1pbQeammmAVlwiVW4Rfj7QYGzI6xZA5oHFfZX2tzXGrYw YrtO/Bz544bQGLfNALDMik+KT5JZ2GUyq4m8iHqywhb2tUZs5+c8WUFIDrYNoLDUaSn4KyryRP7k X5F8Zt1UdJMK1V/RrNlguVDypwuzCHNIaqyschTss1CWWVpkabW1oqKsWaRWtWg2yfZmSbjWfF1s NlrmovpqQEUNH6kbdnIzmiuazSglC872KE6R/IEbmiE5SE8r+nkifRVFIGyt5eIEJ+OUhz+R/3uA zK+tMQxxHAYqw8PuJwP0iMIMHuYOaTIdUb8TWI20bvwuPU9OCkT6EdKYOJMc9BDNpaM0xvgGqVZ6 ki5RHPWhAqNWfS+0lhfQk+z+pXZv+kB+N1I4NLt+AcaxA3fWyngRxaOVDHqUWtAhtNjB8MP7FhEp HKiVQe9ok8xxRmfjW67W9xu59AQ7xDH9JTpAF7mNTrX3GEuNtcY6akLfa5E1e40uxgzUGkPZNJvu wghKaD0d5CzRT+w27le/x89H6jZ6h+1QqGx4eKNQ+m+0hqpoFx2iD+kcMwdyLJfwB3zERDX7avcZ Q4xcYyal0nBKpxLkRnI0J4nx2nhtk3a85tPa00ZrtJ1Bc2ge3UnL1f9VcJw+ohOsCT+RIcZomyiC +qlf0a+EzNZDkvvpFJu5Oyewk+/lF8UcXavZhxNfp2aQ4GAl/ZW0FjJ9mjbTPnqP3keb36hvB7fE 0o/hibyAF/OD/DA/zS/yS3xBmMSHmqbdrb+hX6g9ZvgZjxvPo98IakUW+L5xWIMbsJ4H6QvMrwPH cSIfFnYRp7HeuKa2tpsxyCg2XjeOk43aoWw/+LmpNIzGYdR30D20g95A3YP0Ln1GP0JKGvtxCGRh YRuP4tE8G6PYxJe4RjTH+vUWt4gKcUSzawf1cfpLNVtrm9VW1F6qNYwyw2XsNQ6o9e2JfpKxAjdS ITaYXLGX0c/rdJb+RZfRhw9HYayDOQ3zXYP2T/FVqJNZLBQvCgPe8Aptv95SX1M7vHZG7ZraLUZ3 Yxh0S4MT1pK6gxKgTfJ7oUXqO9xPqt8RbYH2HKOvOIxbc2cewmM5k7O5gGdyId/Gd/JdkOrzvJV3 8DE+wV/hyuojmkFOdpEnFomHxFaxTxwTZzXSRuNOc5t2p/aQtlV7T/tcD9Lj9M76MD1bv0OfbyKT 5tPcfOBqi6szanJrHq/ZW9uxNqV2eu3S2j21x2o/MfyN3cY5uKadMcYsmooxLsD876UHaQP04wWM 8QydpwtY828hC40bcThGHKXWLRnjHoaRj4PLNAVUwDdD/iVcxhW8k6t5D+/nd/gwn+RLuLM3Ex1B 
fbELxogpmMPjoky4xEegy+JnLQa3gK5aN9wysjGb+7QlmM9q7aR2Thd6M72LPlov1t80aabJpkdN a037TG+ZvvAJ8pngsRHXLAge7YDYo/fXbqGNuC1o2hfisHDwAnGFnxWRvAe9ReL+lS6SRV/4Rjug 5TMo1Hetj9XHKkIpyDdbtiEeE/HaOD1Ga0yz5C+IxHhxr8imZ3gnXRGDoWlztINio5ikrdVX6f35 OO4be3QSAfwDJVES98fafUC3YYXitc26/A0xmczaVdMMEWDcp583Ce0w7GA/FtrbPJ4vcrpoDmn1 FQ+SDe9BfBHhEOzAj6D5VXA7e+untWViqDiBtFvoId6DOe6gW8QOfgLr0hv78XZO53VaF1rIt0Ea fehm8TC1EYWiDfR5DH3Hi7gZdu4VrE1bMYV0LUDk0RGRhVV/j0NER14IPZ1BS7mU4riGq+mAWEk9 OV/bdbVlTazgqxe5XBtM5XxF36/vh/N9BZKMhOaa4XCfgU6vRS9vkFWLgdb0JpPAvQ77KRt7PVhc 5rvELTSN12j/4qdFEo2gfK1IDORHay/rSVo3SGw7rEmyTx8zmRymSL07Vvw89Ve/5yOfAv2UaZGM ax9o3xtZhrV2kqlJ7UmaD+kMhnVbir00mD7m5nwTj9QNkaYbxlgqE5v1k0YLbsxWet/ADqt9mR3c 1rDwbYY/j4SG3yT/Px19qb5Yn63fhbPpCqzmvbSKHqfXcJo8hXOrHeR4A6Q5EbZnGs6IztSVemB2 /eW/lNAQ5KXTWNjTbFjJKXQr3QbL+3d6kcpxQqVBHjeh3hS6GelFOKHupIXY//fRMtiAR+kZel+8 IDbgzrtEvC7miGn0MX2svak5eSwd0e/Xi2k07sQjuSl67oVVikK9ZcYH6K09RcD6d8cuhd4bF4xj xnM1h9DeM/LXiz4D6IJPMsXSCP5BD2cT7BtkqE81yX9y8aWB5T6+ldx4q2Ay6TKikZ+PCZFXNE2E N/KVaa8wtTSPuDPMPjzoe8ewGsfwoB8cw4JqcMl31Dgkd+ncLdgaHG0Ntk7V6apFq77qNNEVsujV 2E8XjE/EJyaT+k71CGfgMf9z/sLs60dB3HRWOJrf5mwaQOH+zV8K6s9+/SNfwjXKl313iiE4HWp5 OIXZg3648eLZs0Fnz1Ji4sWgixwc0gd/XTrDLGo+PrY2Me20mB7de3br2rxZqKbQx4ZUJIltMaJF cEgLES062Wwd89vZ+/XvIEFfVTPeEh5uEc+E+bfp2NHmd9Xczx7n6Nch3iHvR37iWW2Pflj9Lja7 vImpUtzr9GO/RvJ/X/I73mi7eIr8xS5nY0vw7uBDwaeCLwWbgrdzcxJi1xYz9n6leOrlzuaZuJft FI/hNP+G093z+P5iUA1m8/1FyM4R5IA8MQ2rZxbXIuhroI+lZUuLD09V0bBwi0k/XBseExUVw5+5 Q5wmO4zP9VD9Cvn/L86+BD6q6uz7nDt3Zu6d7S4zc2fu7OudzNxZMkkmCyTMDRFUEIiKLMaAS60F UUjBtfZFahVBLVbcN1DBtVZlkRBUbGtbl1qxrm1tRT/EpUbRUoqVTL7n3JkkgNrf+30hc+45Z+6N Z57l//yf55xE4CkqWEUXnqhNfV7Gphg+l6nuKamZNBuORGNxstPH1oW808Lxd+NUPF4yxKbx8i6Z kmXD+JbScGdLwd1iGOZaWFsLB59zWGwx9eMPNf6Y0ART3YTWFi6Ls8MTWhr6qX9uO4ZFBetZj3hV UAnmDwwOgYb4PdUO4geHBslLbCv0Dgp6CzrztHlAZ12XaDNzXdjT3jShDo1rbq3DWj30JuahxzNi HXJYbHXYRUNPoqDX0Ti+Dre1QFMudtahrhw0gpmrw3YrNE6juw55MDRoNHMZ6axYAYmtdPLUx5Mn njpHYycGxgWkgCPQ3skOgyENf4o0uPLwcg3vaR35mov6erHLVLUeMCVzkxKPmdwuqbGh2Wiszrc0 tySrxmY2Gb7jXmrvLQsW3nzzwoU3ty898cSl5IVPOHTAYbYKZqNosDgYC3TCtyxccAvcdEvHyE2G g4tuvXXRoltuWXTysmUnw2vXEC3aLBaTqXat8ItuufVcctPMpctOPumCZQhTvZVDEFM+BEbVraXS tgxPGT0Op0WUTCYj75Gc7glO4zSWda53JBDiAQDkwIsDgAReLF9JfLp32tD+dn6QB7GAO4PC2oiL EQfrxU2i2DLyYSm3S/To3hVLKZRC9bY/nLI5RNl8/rx555tl0WFLPqjhL5diCp8Ut3oFi+3FSv+G jZX+F2wWQbbG8JQKwjhXOUQtr602zVKsDyzQR5MVs6LJI/FGE6zWYoFFw3o5AA0K+YIbBoD919Z7 gKx3DyxYX+4Rq3VRlHlESWKpiUpVVeORRIla/q2r/WJpZbjyaMwmw2pfwMdt2IiPexFWC6hQeRJW C5y5i+6gp8EyerSi3BMOI8S4HuB76AcYrodlmcA7qIfx9AiCt4fnGdxjNjPv1NuwTY4w3SvAPQpY F7AuX3iRyxCYnS7uoT08wYBBWLpQtSMhWjWmqFCFMyFaRTPq516n6K1cG/L5QngZ6eNlpE+FK6fp cxs8TqcH30v6lXmkj0DSauV16lWcRyxq1Ly/Rn9Cu9E+IPJP0vif1K/Qnzhz2EyZn8K3Igs6Dwer AAWLQoVBXZhRXBMlEDeh8pZfkeMGnB96uyEuW2ykkDRAmWkntRziiE+zoWdBT0ZKpgkqTAcs2IsK 08gPckdLtPPQA9Tyiy+GNb08/D4kaV8gOwoAom5irPTbVtlx3nYcQnp8mQZahaeSR8L4KYnW7hNb SPPFjNZx08kL/vt7h2cbPjGeByZ9njaOZSUss4ZW1MZOxsezPey57IX4YnYVs4q9Gd/GbsQPsU+i J/Hv8Qvsm3gv/pg9gA+yHiuLrf34+a0G6wTUw/bjTbCoHubpggEb3hL68Y4nngKp7O8dAsSuyaWv txePCqa5FnR2D50m+AXZQt1ndTkE2Zj4z5ykzNncxgc9Dpmzggl/AJ/7IyPh/wX86GaRssQHhr9E huH9m3JMGiDpS1Q3vB+lhv+NJHi5h//9ZMDBOhgHNTB8EFDqy01BR448kRn+UounjQFH2BETz2NC ARHlccpoj8Ud0Q4x22EUjUa7rwMi0B+eLCY6HHL9PQPYBK6TvbIqXsBowGwwP4BnEkaFtqr3ADCf SuV5xSt7ZEl2yy7ZaAr4g/6QP+ynTSmlTkkrGYU2WW0WG2tjbGab0WRQYkJCQxGnT8OqKamhHF3Q cJyLatgvQ6PYshrKU9CMIXMGvtQVaARwcevhX12nzdHcQsgpl10hwVMWSCOFQmI51j/8taZBJ+UK CND4eWhkDhqPoxwnTcol2aEHjcEF9xlCorWcs0AjkV7QJUfJD/lU80CHc3nC5KlwmbLwwgQPaUbj xuHlD3jNxW5ex5SUAt+lEq/DoUeCbwD8FPyLxyg3oL4H/jU2iCXDRyvOvn3KFfngJM4Dvak/yYeO 
4aWZXRm5ru3Y69Z3qd66tuOuXU/9dVfli7svG1+K3tAxa+kuzJN+7Ib2WcsverkjLscru5/dftEf O2JyAkefJd62BwjbR/RBwM0nNomMv3/4oMYJJsSwfs3fLXb7aZYboB5CNnyHxvI2G8c/wzIUmTHC jIiNRgo/w9TOmJhFv2uAeguY+DnbkJFlbDLl2kGtALbjof6oWdA5goDPAYbGP00tgTT9HvzHqgUR JtbOQ3TXCUx5sBrXET/UAVHei/l/7X/uiEGxHvXqWh4BslF8G4ul1PU4QnBraJGOZJHKZy6Wky2M TB/8+jQPQJ1XdHro+lkmWeDsDGFgj4Ak3gJfUnH9Eyayb7rNb1WNtAuhftyz1WJzdcSMgCLlIbK4 Yr1/O/jU37WsP9F0HHep46rUVXVXpe+vuz+9w7Ylw9pFi1SytWbodDwTUl2pUF3c5rISS5E/EQel /4hDEl3H1GVE78RTVL0dkeo722pCNT6N9wDIWrEdAK5nC8tabL5+/NUWfR07gM8CAMA8857Qkey0 U4shcfDAbAjut1LnQXr18xEP5Q/sJw4KDYHBwTLIeg8Q3ppIEREp2ClIFbw1EE6IXikZUdxRr4ac cUHDnrBLw2ICmpq3rVhRlT98oT7cp85tidaIMqByomUCVdI5i9lUi5o1PDOZzMg8RF1JAs2h1zH6 sm9m+Jc/Ov9h2cTaeMGzYPsZd72v9FxYeXtgZpQo7YLL9n62+Acz6hbd/z+9XrPFw9dvmPeX1ePO WLqs8s49xHZ/M/w+DcJCYAibF7Vi1A8o1tjQUBLGJY5PTEl2tf4QmZZHr2q9iV5burl1Y+n+1u3O Ac9LzpdcL3v+6vyb51PnfzzDBYE8t9UVA0UK/aDRAHTSDGdV6wRDARbiRcZ4AMmhSJ2SlcEUNkci YrYfX7dZ6Wh0wHWr2GGKdzT3Y7tmcXcYAoE2g29cYQDUEKBWbLPKbY1Gk/3TAXx5VRmE2hLI3LNn Or8X5D+NJ7ydaGRoDwwJsyXwqbsAfAtVEA00lRJJp4s2JpviGiGoGk6UFI1QWo2wVEy0AtxUVVt7 +1pRax+WqpRKGSUrwB9BL0qNWHr0ka6lEZ+pKsngXHbpv/oXfZTnPDzvuuPRG357xpO9IZ8sH9e3 9vbLZt+Q5QWr4J19ye3r/nAm9UjT1jNv+fC0el7kvdzSbUumXn8y8S28umfe9e1NLtbD13WcsvOn M2+GWPUm8S9gaEEEmbJmh/geoUJRYzAckECse58MBp+ROLfYj0/XRIfjGXckGj2HMgD3MlDRcAQE v81goI3RkD0E/U3IAcEI4lcwQFxBQhzMSW5DP3WFxmGj45xgMIy4EAZ3CA1Q56Mo7tGs4EdYjtG0 2wbR60+gjsSoOvqmQQLV1w7Z01A7T5jUIOl8pnMrPaUaahfajCvz6o/558BjwHn+9Xr7yJUr1vfh aAk3CiP8YqRTA6ZGQYhjg2HoNfzaY5PDPl94st5WniftXdnKbDz/DEPq0B+I7Cr/GkEnPJ96dygK dv4csXOQXBb9XYtZ/WwgxqblcV5jLn1Cen76/PSt6Rfkv3r/4WVkYsQSMWIndPyROOPiIwkp7MPh YBQ9jcmf5MRkQxHv0dhgB01bkJJ09uP/o7GeDouvg4cUeYC6EqWpRVvhznOSiX78t228nEvSlhET HpMZ0E+QUTU3I6ST5GR6jgYdnTyLI9br9QaMbMAI8dzLQuM3BTUsM54xywUXVtXePiyMQAXJBY6y 3HjMXGOz1Tvw8ilXd9z9xr4tF50/XVO8vOC8ZdPaZ++//IorInYg5lMIhNA3VM4Oh/++9fmDpWRL VBJl8boXHvjZo5N4r0TlCA4BhIogXR+gSBzV44c1Wz7mSjTFQmooGlIGhg+QzQTNUaLHM130VOYU +lTGlAQBbwb5RmrXmH6NNyX6h1/XLAQ94OkEY++HJ5fTNM24aBej0AqTcY5zTnX2OBc6L3Fe7bwy scO5NfEX61/Ef9idVmxkzBGTInOJSDJ6duSs6CXRS+qWFpbUb47tyLxpe9+y1yaeygAJ4gUx4nSF 3SEp6JF5rz2GEnZb0qpYcH2BymchqqTNasboMTnsiSL4yMatuQ6DgfX3479rUrjDZUx1sHbve6YO lOEzkUx9hs48Tb2MGlACJ5CNun9brKPegR1ycQduxStGKV7vNBI/hnqByEMMHCRZ+OCeas3E01aN iASmktlIlHbynMCJnMFks1vtlClLZzQcccb68S80N1IswO2SiToGJlVjTsNRLkzeseKkPaWhtDml oRqx49t1ZkdwrU8PODp7qoYeFY+Zim4pEHaIrdRsJx5DbhfQqTHTwYumbzz7ql3PPHDe081d5fr1 b1w2s9UrCXYx3fGbyk5ZuW/xknXrzz7j1HbKufT8dzfc/NVV1zz6p7uvXrDu7Bgnix6Lq/LEh9FX n7zzsWuv+MXJLeCVrw1XDG+CV7rR5U+wBhK8TQBdGcpkMlDPsDa7/Rw3crndyA3kwuaxum3IwGPq HKtF4HgLzdusA+CJmHpwi4eVpU8Po9N7pulEqKwDD+COR/cm4kwrHXnVQRDoiNgN6UMpWhVECTp4 BNANK4buJ1hiMFR+yUgO0WuiFym6W6y76uvnfYKXt4iAwh9CDvGhnkMkURGv1I4RH4i9iD5Dn9lo Hx10q7nZ6tmU0eqgvX6Hy7vaeyO+nbnduja1Tr0z9xC+L7WV2mkZsA2oL1teVJ2X4I1RqujKAdPZ FIiH+of/tqk+nh8Y/hskHwe3CExdXYLMZepiA8OfouTwJ5tSsSihRaJapzHxjnTaFOxwGgsdJnu8 H/9Z49NpiVc6DO/5OsrSDImS+vGgZm2MdPDvZTtYueGoNARMdD+pgxEo2qsbKrFT3TTrc0V/WHDT TEiMaCjgAhzKmyGHqDdCGA0LgEh+NzQ5pqChIiQcY8kFCazfzCxQL+7tQ31dZEtfHf5oM2QH8EE+ 2gxJA7lq9ZAzGL0wMnqhh0kPe/U5l63s9sLtbjLnJnNuMndEqjB3NH4DBraMQKFeL2k5rCDkPKxv cC44d/f69bvPXXhaZtwbN9/y+ri0/Z4Llt2z7sKL1nl+cfnlv3h0+fJHqWsaHzj9pr/85ab5DzSV 2k48c/Urr6w+s3vcx4vuuHPhmWvXVsyLN2w4/4cPPgi46ARc9IBdJFEj7tZyZobOmFWUfzgxkDAp BCTjWWgcXmjsjlBDky0GTYPUmE1l3YSJcT3FD8Sv4v/M7M8bdyJcJChJnuonSpdA/5+gBpBTDp4y ubYWnyu+VqTnMfYEUhy2lLWOzUA2CD27AhN2mkukOyxGgmeapQCAZol2SHZlADDLTt2vWRIdnK/k e8/ckX2aehA1jUEXv38IiNYBMI0PUNUa9pSr+YWgZ6o14Eql8rE47bY7bA7KJACdcfIunjYZkxkW bKTOCjaSUmLuBEEqJ87TJPlk0jDpgCbOR2F+K8qZCqPYdRh4oV6VAFYfHsUw6OtOWtOqR9erzpYP 
i3mo1JRSxtTb0mzY2bl53uz7Tt+5/odPNXW1KWtP+5+rT23zeQWbJ9X4Bm5wle5acO69935//NLG KPW7pcu+96uFtw/9bOWjH2y6sPvmQjnGewWP1YkbP8y8/dLaLdet2qxpKuhZr50YzkR2yAGLGstt kqzMJmQSd2AJMIHG0larVZYDY8WU9ml8NZMgJRV8REnF+V0FlrHGcGZ3y/jp5DW0ZrTqQqGz8Wp6 gcGmr6JlkzmB+6mDmt+d4KyyL0jPEDF8c2JBLIsGUQ7Utg16gbLxB9pRAdgbWctIHlglskeM6OIh PRM03Ejaw/rUNsID9ELYa6TEpZe5MLoRddLn0tMgd71Ki+00vWr70mawsGwCCS7EMkhgSB9D34Kw pUfoxw9vRkwP7rTihyGK3ows+CnIgx9BDFwx9YhmQ9sLJmySRdT9I28/jgKLkgvYW62T7hncMziI 5P3eQZmHZiVThXm4evWOXjgbLQ/hmo3Q5x5aJlkFOWawHPp3TBasEnVsxWSTBdmJH8ePO6FjI3+/ DCF6J30DyqA8flJTmwUIJf4J2ZbcseLxvhOyk3OQ5kvzffOz3bmDGU5FmUw2jykqZ+H7qQ2aZF9j X2en3rVje1qw23khaBHEeJq85VCUxoyipDPBeCbLGvQpk6lRD4BBlsrJTn1KkmaJkuQUg7IoxAJk 6rgwCl8evj5s2BXG4bQ/HA74gzG/z5fNZEJ+n8vv94mCEKJywP1ziXjcAvLGIZXLh/NUPs/Kuazi cyo+mfIN4DmQ507QXBnFr3FsGQmY84f9u/37/DQQnuyT9ZQi5BRxAE9AwvCzmwVLGVK8ZzUe7uUE jIQZwufCsECD+rKbC5MWgWKq5Yg+cGCSLle7Q3pdgmQE+raDvj0FoXmlUU8HVoKSVv4YsgJmtEbx RW9fYf9zh0/8Pw31p80Q/smrWjQ1HJVQ4FoaF8VHvWEwxA2GHw293XePXrX9HWk78dKDehXkAXx7 pz79e5J4rF/7Ufg9vLLy8kjCYfiEGP/Xvx5NQFZSZw3dRXb0ZoMNzQUbCqAUasDf155+LPOI+jvL b61vWYxrMqvVuyJ3JNepv0yafpRYnlyqXpBbY1njuiaxJsmcwp/NL7cs4ZcIS8QlTvOUyLTo8Ymp 6lUOYwM3PjIuOi5ZzoxXJ3HH8gxbkCOBqD/pz/gLcS6jMpfwTyV+XzBMjhyfvDByVWR1/U2RjZGt ESbLQMqoIhSUKMaoYhxk6iMOQ7zO0RBJBdOKlFKYUDBUbGiQGEpi4knOFrYVbGXbDNt822Kb2daP r9DSuSQSeIHihOuFZ4Vdwm5hn2ASfE2pOkgayXbGPgABuXHKJVWbIH7aV9ul7NWTRcLQQF96CsRX s/Za2erI5FAPL6FEVnRZrE5FTWZcuRxOWuI5nBXTOZSwKjmMxpgG2Snq6+vrha+kED8Mwcw6IRhV tDPa0NKsc9soJETN1dQ+ilGfXsDn7/rtxisu7d54xpBe3P8tTs+f0XHMjRdVNuOHTrx4wty7r6n8 aWZV3VsvvX1+4c55M685k6icao4HFrbMuPKQdNzCNu3iCeQ8+vC79An0o6gVvatdnHPhAiqjGchg lNzSLM/Zru9JC/JLXEulJd4tHktLoLl+ijSlucfTU1ro+UHpysBtBUtjkYv4YxgZGIfkaWmIxEMc ZP+iNb5FFZMt1mvoUFJtMdCUyjoU5vSoovjG+RWuGC4WiuUiXZTbVh6mhGmDJJoPDRHx6/XmqvT1 cF7bY/K0kcgOcR1Nfdx68tTHEyeeChwtAIwUsJvQzuDwp1slyRPwSiP7c4TKgaePVLFqSUVKTxzI P5hCegyucTASqfOGUqlJhBnD29W4IXgo46xlN54xS1MmpgKY37LokW7BLUrqSS8v6Jl33LxVDVd+ uHIXHR5PVPJx2Of1z+ycq4Zz0+dPnrP2qco/5s13S4KncFpv3H/cIz+f/chlmPwqA/lbZ/SF4HtB gDqbFv2ZZZX1anGVc5XrWvea8JrI6uh1qdXpNRmbtQ6nIulAlBzYZW9LbY1SXYwnSPDW6ksjny+I gh6GIuOSMa3XGIOMkOfCIUkKhjyMGmJZKsRQCYXjMMdFOIrz5bOhEI6Atikk53bgNsyM5YNjzkCo ADiB3pBM5b8VbsEXmiIZi9vB2TkbZ+Vok5JMJeuS6SRtcooukTJFkxlLIo8j7ngeJzk1j2NiOF8r FJAyf634CBnh4f5ByuREb+YxJNS9gnColO4ZwWN1CHxu4WP5GengD6886yeVdjJzBy4u3N4rJyYm rj2x8krNKea0zl84bcGyFV+eOpF4xepfzbt1esfc7uzx4A9zQB8F0EcJi5pvfnixabnJIFgdqigG rbFAuBSPBwMG1gRxZjMXKpOrluXksmkWBVHR5fOoTmfQ15QnBk4V1VIpmE/lSC5OZVRFCeYgGV6k tfsorFjjCcVXQkoyhJDVR1mZmMIF8OeB4QAV6DQoiMXd7Hp2F7ub3cca2ZKi5FGOz1G5foiIUjIJ 9CTEnuQsiJ+L+whpap6y2FvT3OAQqXTtJ5GM7+0bBGirodlQtdRFvgG9BiG97H29fbRTQzR9qKoj b4zOk7I8FkZKwcJo7WtES8JIijp2T20Gn0JdRcR+6AyikT4dwwxLyczQ/Viv04AWvFSpEtbjWGXL WLSqvEtmXq5Mna+/8xlp54OW1oGWloGWmtAX2rzTjZhjbSrPB9moP1SKxYL+xhxXH66n6tWmpmAO wkgzCSOi7FYFISgrWZTm01RaTSaD2VhckZtQMqEgJINWWJlimaZkLqmgLJ/tzhqyRN7ZRCKOsMLH FOSP+Klu/3r/Lp2HGP0nCREeI/5y/np+H0/zcunAduJHoyEFhM/X9EGKjSTxH2of08XR0keHa6H3 W5SAe4/eF6mpoOW/6+DO6oZJxT+iA84aMtxFBD/04yOVcARfsFu+XQWgg8shclwJkaMdK1rrtZHb I1SBL/MzeMPxtsmJWdZe26zE/db7E0+ZBmwsHffEFVsqriSaE6Zm1HY9amtDweZSgQBWI9eAG5rz DQ2FfLBkYcIpPufEIY8XwlOuORMO8oaov11pLijN3y+VaGc06TAA/VugRVwuJ5VJ0mzo+/l8LoQx 8k1IKRwTZihG7li5+Kiwoh8/4vUCl45nhO3tGQsv1aJmrRRzBMz1VnGuOiB0/bDI02lBPMQcI/l7 KJDypuFVN/zJ1oQUk+Ij8QcCUF8vCUACiSx5qhZYPLWdk5F4VC3Xw5AmsWjEwYzVvRW6cPrO+Stf /tmMVZ9d+9K1ZlLv8YqCB5te/dGyHSc2Y/TeCT+ZXVUVhryHd+FNlVtLzd3Xb1p1+2psXL246OJ8 oWfCsid4yqKzf9Z74W2vHojU4RZQsRd7nHbJDBo9F7xqMXhVF/61ZhPvkX5Z2CztLNDVVMFqV2sZ gi+iM38+iINqNBiMRIO+bIM+hQq4kG4sFBoag9n2iWSK58rhMlVWu8rliV3B9moeYTWptTSimkRY 
pXQth1CT+s/h6nCdmqirSyaC6vgSmepCrbhVbWptLTUFx8djIYQxKzco2awaUXxJRVWrOUP7+PEW SCgaQ4mmUKJLC4Sb1nU91kWt6Xq3i+rqp3Zo/kliKBoVQvWURl1PGWZQuyiKo+ZTiykD9RS1Ax1D foUB6fv34LmE/IEbq+16Rkw8tp1kCjonJK1Qo4hHm8q3jr578N+eOvpn6Aigl6UKEHpYzlWWNGgK EIi2OZwwgKZaYop+Y5uiBtWj2xjRb8wcnW5cMfSaDteVd3S3byKJxVc6glC5JSGfHP6KzDTNH7lH Di+hmiuhI1MOHchPwFtG+oekkffB5j6ABORjsLkwekvLFei8MW6L2COuiLsQKIQmGBtt9a56dzlQ Dk03dtk0l+aeGpgRnBFyk983A8uxNevb1mBJYX0caEaBQBgF5SoXsgLyV7mQVyTjlLtZcLtFIegN K7KoyF6KUhhOYVmGJKHCDB7zcuTad72jHIhoHZRNtD74v1Hlt2nrG/vYR5Qu4tTPj9rL3q2TfL2S QU8YE9aYMAn6zgVfvQ3kNpH6ntbrzXmbfZ2JxlJjS/Ox0Z7Oc6KLOi+KXta5WlvdeZt2R+djnTs6 X2p0cqi5cVLj7Caai6nNk5s6S7OKz5V/oz3byfhj/uKC2ILijU2P5R5q/ij2Ve6rZkvDRISKI3JW j5CzAwVwoDECoo4E5Uy9XkqI5K7PUfU5nMtdX8zl6ovBTBFVteBARmxsPEIRVgjFVUWk42TcrXBK WKlXDIoaI0gYTMeinU1aM12eGCsiEYWiMVc0GkPRYoyO4HolE1cy6bRcjMUioElQpZdqbVEmlMsM wysay6B+6tIt0aiXbejHc7ZFJk4soolKwwB+EMWoSzWP1l08vbikaEBFrdhdNOwu7gO21tmyA89B EVTGzZpwTDRCLALxeB8xiq4pA3jmGD3Wk8X2dpnf7xvywrDPR7Z5CRz4ZB0VBr1l36AOGEPtOnXW T221k51HPeCszKvk3IqMtFB7GWmBFmjkBmg8OWhcdeXq2cS5K40/fg6RB7yHVxAK/x1LDq899Pb9 VzgxO/j29uqBiO0oNrx7s5xoIjnGJrjCEuYSA9aTp+8Ck+gIdOCjsAST0y6ekRGlpPCsO/Rt0fVj LBCfTmbuoI45jVz/TaZilYZzri4nTl9AZu796ZaV+MXK6m+6wNDXlHEUW87K/M+yzn361vSCVzI6 NwHvmAPeEUWLtFag7yVC3wk5RCjoB/r+1xpbLxG2TilWPyHgHItZH/C8kFOUYxsvOezo8t5e4G3V wugYO/sGZyNH8sYO4X27nPAbVF4/IXE6+bDPP69L4YMRV8fTCYRW5h3l7hh54fM8C5+nlWrVyu8H 94aoyWhK67NoF3oNvx14NXgAHcAHgpYkSgVTIaX12MDswIOh7aHX0ev49eAn+KOgfQ7wI5sgTjwF gXa3ko5Nd0TnOpIPhiEfTDs5TnQGbWE9BvMo1h2jYmklFksqwXBBj8LWhsbmhoZSc7BgNepjppFm GCMdtPrd1R/mxZw37KW8aZfX63YF/fm6KgCo3SqlplOqWpcK5vuHr9ECQYwigWAwhCkXJm2oFaFQ MOSCKXDeoGYNJZVwOBQKBBVMxlMCAX9rC2VwK34qX0g1K4WC1WqjnYqNUVKtrcFQKNjSHEpp6BUc Ts1PLU49ltqZMqa0VLoppYklLrUmtSu1O7UP5vqp9zR3MIznY2oNfoX8WW06EKApioZ8+hJNckYM tIsOzXC+4nzX+bmTdsptv64lV9OIT/tkftArtBWq3719MOxV1T4vv9enHyQgs4TpD1VdnlzKBC30 QRUIwJBIfXHlj6tVYOOP+edU73dzgL7/PyIBj5GDP/iHQD/7cBx/8+DCiM9i/J1nG+LU3adXnubv 0EP8i6Q9tkTaP+IJuO2PevivHnd4KeQHXxbJuYajg9ZQlnr9yMhv+ARR6AeQQfwAMgg7kvEhbfgF 7rcyJe6V9nq/4r8S90v7ZdPvpT/zfxbflN7yfsx/LJp9vE90S5KX/r34H+6A03AXe5NtA/WQ8SF2 g+1F04sMcwV1rfE65nLbKucq943UHUamxdTCNLLttnF8o9gojfMyGUq1FfikmJQK3vGU+SluJ79J 3OTc5H5c2ukdkJlHuV/yG8V7nfe5N0iPeR+WmdnOE6Ve7zr+Juda6U7vbTIzyTnJPUma4j1BPpU7 lT9JZNLecVyzs8Xd5p3OTeEniYzVZGH8Jj+T5lLOlBs4vYxpxsnZaWT2QLoiJC0GR5KUISOoHq1H RnSRK2mWN/u6Lq0dZyWHLsgGzOgJeFI0rNUNQZW9vRA7tkqWgFAW+4cPbIYr3z98cLPoLUvkiJ/D 5S9LXilY9pKGBYffzMnkrU/I1dg//Obo2CqS8a/Jla1dneRqF8pu8lz1ul9z2PiyO2IXJzhD0GCy FeqUy/balSJX3l221a5eskNoF5wTsAMaW4z0vv2kJIlxiJzBhhwICTwC6xPNTRQ5HklqciL9g6s/ X/VS5SVcemnVZ6tO+ezpJ77G5o1Pf0ZNfrDy3no8Fzswh+esr7z/0Mt4cuWFdz6pvEX+F6EU2gyI 2QOIGUc5tE/z0j7abw6hsNMvhpP+kn+Sf7tqyYip/uHPNP4C3099VIrJMGt9N4WpoxnPtzPJ4iiB yeo8Mo5CSZFLlBNUIuEFOplOckCRfIUckAhezh8YK3KOlNZIZY1UAnrJb+JqXEID2SeA0UNj5QiH n1v7TZn/PdUkJ/ZI8ew7SgQjUUg/upes1ZXjUfz40bUBiEkfPfrXYxumdo+bVfkK23rvm/rwTypv 4N2VZUc69h9WnfiTZKvPOfPkiyecdTeRO6mfPQNyz6EWfM92FB1+TpseiU5QXZDP95S+X7ygaDCr 44pTiqf65hSXRZZlLy5dV9qYebj4ivJG+LXIu8obuc8VASh5cVJ4cvTi7JXh1dmfh+8NP5J9PvJC dK9qD+0YPohYxH2rjo4kmePHdBSOZNSoKZbLxsN51FxjjDkUKuSJ2PNE4vk8A2RUyWRINhAeoC79 v5x9CWAUVbZ23apeq7eq6urqrl6rqpd0Z+ss3SHQIV0QCK4EHRhAaUHADXACLoPbk/DGERAVFFFx zf/UcRQEJEQCqDgKiIMzwV8clXEG9EV05omgPzo+MZ3/3lvdSRPEN/MgfavqVvVW955zv3POd04T VWSnaiPgFwky9VE/EQOxHpDrXuJf5Sf9PSCuomIOk5RO5YByQtEpaJFysCoDkswJhmTExvMWnB5d yC3qy/XlGOwPwoxqvBLgqBMUhSJQLI01/LMD30icv5kreCS2hKzSjoGTqFDr1gprWghBDbElJdVC aS9yGwoOCUT4/GlYZxROQ3DR+sEpM+VMLPfD4+//+rFLOu5R0dHCx9a357/59BddFz13c34/SefP O33ivPlvlzyZbn7sawza3K+mJ09a0Dj5YYh0tsM1gYdrwjjiE7V8dP0Fvrb6XP1i4U5hmXeF7+6R 
68bS50qtY0g0JZ4b89ux77mPur9xG33oSzo9DYgcN7NCTYzOeD0OPU+AEfa6mjBVnUIxCdYixpqa Umy0xSKZzWOnWFbqqleWpaJyC6WD4i/jAMWI6Mxge5AMelv5qFobC8fUMe2JJYlViScTmxL6hDj+ 8R0gVMKb6jsG13Itv0SLWhTCFhVMP4v5nhp/SmPMubUcNEQ3AUiPDw9JaHyAIIn83QWuU5H5U6AS FChPZbFBAh21RsOJnBvon7pj5dPVF8y6cv2YqdOPvvHnX6G7q53Z+cQT21rH1zz8zowZ776wWdfs R4P0pyCKUty56vK6i+tDrD9Qdtdlq/evqEGnPkcBjBkPPbFg7FVBlzd8zjm/vuNVVAgGSncT1qr3 quUOszWNnK6KP9SA3OKkSZ9GflanKDRAvC2GOYgISChMYg9of4lh2CCE5nBXlRh/0j/L3+vXOfxZ f5t/pn8hlKlN/sN+k/9vUYS1UOThZIGtmcVKcJgj9Ay36BkTWR5MjynukKsPYTsb2xmH8r/F7KkX 0N073czO/wXNbbA4vwJvoUVA/AzOytvg964Byk7Cj0opD3y3JcT4UaKFD664ymJfn+Go/++h/ya/ MXzj+y50SjJbSJ0B+CyhX/seNRg4jwaqXYyLdNWLLpdHDHCa4WwnoM2cIKDJTATKWVrztSXMNhtt DrCadTwhVl+wiqHlC03fRMIT4+gYx5IBiCcUGWL7djg2pINoI2ai1Jo60Rs0mdrMM83t5iXmVWa9 WawdNX9IHWFfKJq1uUJSpubhKCYb/S9cVjjLBSuXxkJiErzjg47pIrjEXJeG9DCjiOr/4tmFG2+Z EPTarUHQgOM2r/77z1ZchUGn1qFr7h/74onZb95Evord0hhWjl35+gVPzME9RUsJ4ird03Ck4pRT M2tVj8tDCm69QQdBWNzLG2KSlTRHSFdCU7JocjXhFDbM+1cva/e2+9r97YHlwp3u1/Sv8Z8L5lnM LHYWN8up6yUBIzBuVVDdOg/pcwfFUCAYT7gbyAah1t1Ktgpj3NPBpcI093L3b91vkfuEQ24+7jGz 0O7CLQRbn281Q+tLB3e60Y4Nu0BZZhIDmDTPME4+YONdchnqDUakyMIISUSYyKTIa5EDEX1kdTwS KYsH5DhhNeBLzA5zyEw6zLvMh83HzQNwqFfrzWaDPmDV6yQvuoQPzAyAQFoMBLxiQBI9BLwdUk/+ ezXl0lESr9fpgi6eh0ooDi0wjwgtNxH/rHrQ44b7bvQj61TQJcArBDLm7iF/qQY9MQIAaIpROlNZ TPaiP0lyxmyGmM2KfhmvkiCgjswRIpT4nFrXK4KQCES1PC2qqYaU2JGEO+FISlRjZSkxpjriofjM +JL4qviT8d748bgpvpO8GYIJN6hU3QJ8mqAm4QM+VVC9aYdwHFMFp20l1Vgarvw3b9FLrlfg2/EE Bd9aB6pUV4gHr/GAjzF6QOjb9Kv0vXqd/hV4NkGMx56cuRoV5BhUNF+KTB803Sr6FyE86jkqMv2L vJ5jGpcr1wfPepgviUEJKfgDkTOnH5txJsQT0UM7bnBniDgCX48Y7rr5KSrJmR1JZMVhO+78zTG4 yJfDRX4b2UF63V7BW1jOz9/sHQyFkwNfbCFN7p6BEy8KTHG5Ry6cXG66HEYMkmGuG6ez3ukc1ke9 f8eXf7vjthAWwEakB3e3/+fSv127R5NI1BGisj/8Ttc86NVVqOQP71B/LZHFSVAWO1A8gaxX13Jp MEpuDKdbVGub0FY9pvEi60whV31R4+XWXwi/qL688Ynq1Y3PKj1cj9yT6mnZx+2T96X2tXxAfJE6 nj3W8g/iK/AVo3jgy9YBroXlWsKMEmbkVH0dkFOpFo7jgnKKl+VUXZjhmCCo4wGoIyFmY2KOGO2M cTE5JsW8Y2MtsVQsHcvUxupiUg95k+qHmI82eU0Zspw8ngKpWEtLtrExGw5XV5e1IJjHZcfomRgA eqtVHwhYBSEAUDfr0Cf1WTi3Zur1eu/4ulgY9naXXRmA74TO021Q7toDVEActxPEcHDdpWlf8cKT nmPMScQIQzpYvLDPwxXdCSIadnQS94nwYLAzV4AXGHIcK23QJERGRJBBTkQGOREZ5ERkFC6YZew2 ATZWvshBnY7cDnB24igCN3CwCz4H2pEHu+DT8BY+k0OWHHwyPobPR9stp7+EA/7DWh+/Tv3AV6rT 7s6yDtafZfUu1EBLEFmKqhN2yR54sgU1jMfBa68Pt3Vwuw1uWTu0D0CpYTgdDCc/DetogB3DfY7D O8h1YJVm23yN2jvzz+SfuxMfn0RBsnpwV345nuOfohl9GRgHxl6G9o6iPomc2t8/yJZ6NT9W27cL BqgcPxv0PubAU0Nznhyd76fW6j4jWKJVjbEW+9cEUWeByOhrOBtNFpoxcQSImBm6hp5EU7TIXfFc abAh+2M+xeK3HkrsfRt/hSOa2aa7//td6M31f8cpD1AxN+X7yQfwZ2hWpeGfgbFwZjoCsogAVXj3 H4rvPvy99UPR/cIgkNn9pVGKTfoxKGz8PUZPqExcf6/uD/kwYSNsW42XAosumdQyYE9na6499RT+ 5LmQKPb3Dt49klgNcdbPqQ4iTjSA2epFzxufDj1fTcWM0VBGd4NzsfeXvg7+1977+bXe9cZO/mnv xmS38WX7i/xW7/bgfvvJWhcNRFAOqEfYB7zkrdV3VT9a/bx9ffWe2vdqP601xaGNtlH1RpNyNKrI SpwLON2JBploSACq3mqubOgBR9RLwPI4QdfLlMUsIw7AwkqqMpGxWuP8Y4wcMKITNkKSZBVKhUMG STkrt8kz5SflTfIu+bBskr2N7lU1sgGdbzc8adhlOGzQGcQR5TuH4BeouLD/6ESNfqjd+mLSRTJ3 DGExzHEfNBtGsiOHWXzQ3hMLCn8XYYQgNDVwgkjDhzhwsoszVZuKlQlyiwrBah5eupMIwkucA68V ahbk5PRQJQJ3SRIOSrHU3NiFuUfF8LkCV5matu3AQ88feX/U8raOjtkvSmbGTdvnPDbpyS0LkfDs ydxx7rarJi6+7tqdc25+ZF37LS85mOXjrxxJeziWdnjLH5/TfxDbff/BMm2Ziy+4eupM5DmogmM/ Fc5aPxEHkRcRLNqoWpgkhkSKzS+gY6eYdImi4FL8QSMFLFLMmrP0gDndMdksyVArz1HLKT9BUEaz JSA74J0nDd7y8GTCKrl4RCF18O38YZ7ixcRl95YOBxqEvqJTBkohWvKhaoVKuK/ACvipdNXzN1sL g6FOnmcGNZaayIT4z+Nz488pz0S2ge2Wl4Mvle3W7zcd1H1k6tP/3cQKulpQpx9taQFtlnODPwdT 9DljzjIXXKlfYLmRvJW+NXhzaEVwR+gVpTsqQB16YouFiUML/sWgoOVl5sCi6YCFY0S4eAKZieFh BjwoyR0A5Q+/3wMM+X90f7RmTwkH54lD999/CD10n/W/uzf/zeu78yf2PoNTZ5txkGLfk3/5y5Pw 
gfJn4eicDyWznDjRLdMWB3ILfqtWwp03XR9FPyw7Ejoi/1f072XGiKtMGCddGL2wbIqUi15SNs8x T7wmukK0CsgpeL2Tn+78uWt+9Mqyb716g1dkXN4Ek+Ci3ruYR5kHPWu9z7iegdeGoXHjEHkf5hWK frdmvxPLWTlhtHTpDP7/cMthiz1jmt4ZAqtDr4XIkLeSl2NokDtjAAUTV8eomFixu2ScobRhJkhu 0YUntVxZ+L+vwP8YohZqNjqyZuDihpwkRSvdUGqlC6WMwbBCpFMENMb3IHUIMFvQsOmBna//6fnZ +y92Maz7iqf27c+fApb9v6NsfiQlr4a8bt+Ejr8/9NTBcybxbrZi7HxAvbkfWJEs3A7v9npU+Q7e 749fOrf86nISubg2aqHUJPZyKaagB3UxvqTb5/O4lSAtKHFzjoZi0BWX4f2G4iApMh8krBbeiMp4 ukNmqQPVhAPAWxmVO6DJ0QPu7qoo7yhyoxcV7g9yUTVhEiZckfrg30kkB2c3BmtrtLopSAi67CbO hFTMkFxsJ8ohKpX4MmQwxwY+6wqbIuKgjhpc4sLpYt4E4scUp3JpGoyO1FTM/R9f987NN79z/UcP 4uOFH6x98IMPHlz7ge6zU9ci3fLsvpuPLL7p8C37wCFtJnd+9FEnmskk5i0l4UwWCYk4oF5DC+tc ZB05lryYnEPuJfc6fy8e4g6JH/n+0/Np6HvBJvrL/SmyMXie74LQDN8loXbfgtDtvrt96/zrgtv0 jhuFHf7d1G7uLf9bQYNpD+uVJLgCswHZbdTJrMU62ZvpJMBCKEE94FPVrUgZkOnkQTu/i++FqkjH i3L5hpIpeuExTEs/1lfMb8IU5NOUzBaBN0CVsNXHh4Jkz8AXg6oewD9ZEIZRWbWZSRg1gpGu6off Cp8+d9kfxzjtjIep+WbpB/nDwLHvj4CeKr63Zs1BL3j8qTeb6x0iyzJ1U4HvrW1Qc/y/pSs3brgH RVbfh5j+EjgzU8R+NapaJ+k79L+yLq3ttG6xbq14veJgBe02OczWfQyjmFPVRC2o7SF1LxGEUg0B SA9QVS+AMzcSV4hoLiEHCIKTxOoqj8FsohU4F1W6gagEkrcXT821qi3pUl0LXQdcOpeYvnE7eLtA rbsQ0xubmKPYPdSEnKf9OGFxGEM7N4yqbS+v8MEBrQwRFb5ECKCilEuXnpVTB3cKCTtDmYcGl6uI w5IA69H+dtTufwm1L224d/GyepeHNzkfuvoXi8EKrGht/ROKOJLcjubjknmPCSaB49yUe8H4JRpu I4l/y9+uux3OzDKiHgTV2vH8Qp78SH43+oXcFz0ln4wY5ieurZqTnFN/i+22xKL6uxMd9Y8n7qtf n+is3xG0kyakDWZjBWHW601mhSSCFbUeiXFLcCztwTW1skRXyMSamBEaOwZgAPGABCSaZsyd5s1m ymFGDqNN5l6z3uxNV8sd4dXhzvDmsG5XuDd8JHwirAuLqfLLT5usWFugyDgcDAQgs31IpWaLrPmR w5REySzeSfgGThLegZNbyk3QCvhuS9BE9MCjSlMN2iSs9aizSkgOOa6Hqi3lQHowYMkb7WR4KO99 REMaaREyneLq607LoFuqrX0Rz8IZF2Ia41fnLS4Tlr33wqlTL7y3bP899/z+9/fcs5/c9wjWGNsn j628LI75cRecWz7mh+0AdHcDIn/+A2//Yc0Df/gDlIUpUBauhbLQCK5Tq9Z5T0mkDrjAXMONhtXg AbITPE1uBl0k/YzhN8at+m7jXuMHxsNeo9fEurHedvAhnuRneHje7VHYRBIDnsoZNZWVyRolwdCa vrcB2wzsDFQYDb9aojMK+LWxDh2H08nadLquVmkEiCCuS8TjcLgbCZ2RoU1mSTzsAXCdeEq1jCJk qXZXTW8NWdMD/qtr5ITLBzNikJLBElVQ+TgywZ5V4f+zLDd4qsgtAQOvoWxqgLglrLfALYECyXh9 eqMh6tOLIeA1+jWRRDnjQxGN7YRh4GS3ZA3xGvqZrvnOtWzyIYw6KLoajjWeLbQBLp605tLZK2Zc Bo2PUP44Nvx+deOMMckFpQxXLNkQF52aOmH8qrb+fwzKL3XpLVXS4v4vBquMNGuZ5sQrcDYIepag IIJdopYrYp2oiheLc8QbxDtEo9PGTOMhjjVYzdP0esUq+MW1LohjqT1kD3jgJb/BZqXRL4Uj9y0J zRC7TqeXXG084MXARUuGbESmH49SU/bbY8NMRaKUI+wKp51n8M4KN4BcfdsScB763v0ebMyd9w0K 3evZDz/MX/TD1yWaCmIZpJe25m+nGvE3CxBPqhUMKvJCMtSljul+iO78Nzg6iA7QQXZQax32iaZV pidN6/07/Hq/yYcCUX4ozXqLqQe88JJOp1i0L6zaLQbvZFHinHZhTRCFBWaqLElSVDBktUmBQJsO 6MTgDtAN3iE8Qy5rTLgshgX6+7Lf9g+x01H2LlwA0Tcf/MalKQP6uoY0efi2pXkbItmQE6ZNGz05 /w2+Aeb5d6Bv3/8Dlvw581dXhbDg330VlPJdcFzXQClPkz3biQScxoItm0D8f96Kt2obZ8le5fyN k9ydAuV8ebQ6UZ6Kp0dGstHRiWxqHj8vbLnSCcLOBidZwbclPox+mPoi+kXqVPRUyjQqOio1LzIv vZ5fHzZE0uEwoalxy6AO9yOh30qEQCiE3tTKZEM4yQ0i79CMcCikhBV/mKiqx9qipqY1VVNTn1Kq UmnWgl/InqTtdgutsIiiAy0ojZ/jWYcJOoqPd1bGUP+ERGJGNJGIRZXKaCQaiUjpFJ9Op8K8k3NK RJgniDDhTEd4fRgoGb/flfEZYpnK+kxVVWUlaclwLGHKAJLmkQltbg+D8CPRyJT0DtBJRGGPbWGq I0VKqZrUrBSVQtooMMIJ1364+iw0d5hJxiyZa+AOWocMZrFhJ3ic6NCctEN0O1T6EOUz4Cxw5I4t eGExQbOQnuIeuUxXrXm2nAMHu4JNiM5wsMs/QtuKddrWXYW3W4aIdgAx7ZbZC2Qcz0+l852h8M5+ LVRlZ1x+GuXOOXCkyxtJ8ZijwaacSD3CLS4ihT2+Qwg6DBE0b4qigl6pgW9LeeXwCnjVpMJV33VF xZQ0mPeEnGqAHW4e1g/y9eRB39kZ7raXS4zFPeCKCiwrNqQ4Ls/3gCcvxyHgE6g3k38I/DJ/V4np +D2oROoDZ95+mZ8+6Ea7HkrUTihRPJQoD5FTU7Nd17t+5YLgwzoNYUaIEqchhMh5XGtZVvEQEBgS QGIZpo3ZxVCMKJZqQ1xQ6uxa8Kwa8L7T9d/XSP8VTYcSpQ4/qwvl10JM1kqWq02OEY5G+0jHKEeT Y7RDdbQ4xpu5mLXButW3pVJXBhoAOcU/2zjbf4PxBr++wVjnH28c759i1NeYRozG8nl4FBjV2jxq 1OhmZYTLgbqCEgcmcQe4I9wJTkdwDKdyFNdq5ziHXXFFQxgoEAqjkEprUFFCQSXaUKN11jP1ZH1r 
sr6+Jqk0tKqo84rDLaClNdvSomaVqqQhGKuuigf8BmAsH6FmiFZDuUx5ZbOZMo5oaIhGXbTNLrkF NZSuEToEUvghFghKZTF0HOuIkbEfmomklG1GjiyieVdzbzPVLE4of8FT4jOBOxVNg5tBinyBMl0M fHMjif8F1z13JnW1KDUGpIUxmBgOKgqoQoonPCJt1ekt0YSuLAT0BpF2h0BcXx4CHqs3pOVkoiR+ XP4hl4NwwzeU0kEPfEno4MM4cAi+1yEIXt4tYk+g1YQwok/gbcZ5YHCLPskWuNWKv+WcLuxRw1bs EEAJs1pJptOPS5DKcCH9fP6CMbPlxutHXdowAee3PTqxvvrKMa14t622qnJ0C+7+BLPy8C41e8r1 41tbx2cuuKS/G81m8iF18vgr+t/F+/e1TA0k5moHQ8YInOUL4CyfCmd5I1imjnjP8J6J3G3YbSKf Mm0xbDFRi4wdRnKOca5pro961PeMgbw11AW2kpQ/NC9EEkBHkkETp/kiHK6Qi3S14gC3wg3HtNqS ZCfswN5aWJU0TMsQUSZKDgO2tnSrBmzrMo0GsAMcISQwR3UGZJ0RYlyOY2kzLXkPi0BECwqD4e3q mk4Ib0WEbYcgUwHZapOz/yRcLP71PIx/FdfyPr/eZDQZTKTBr4cTzmcKaNi2HGNb3yBbh4dP/euL Pl6bXotwgnAuBxFcQ8HwPGN2nD6LzoC3U6fdO31WW+OleD58jOtR/fu1P7tlUSm6LcyVJdPHJYIr z+0/PoRup9/a8uv+r4ZNEIgB7xs4rGuCM8RCuME5aiMn6ATeLVBvgbcs75F/1v/F+J7FMN94DUte QV6hu8Z0DT3PtoC9wnml2+SSKYdspixmo1UmcN6kmMVbuxtvVZsrvZkADFFDzIIQs4dcpno42aCi rEoVXtNu2GXoNRwxnDDoDT3gky4PVEFFuwUubsf6c4uQyVCsUHlaetZOQoAIlB84uZXh7bx7x8An cMX9pMsWZIND9mQOLaNIrFWLwDO+LI8aFrk3nY5g1sLDxkTDxogaFpWzCUDEZ+QtHDwJG4Fn3c08 apw8CmX1DOxWObhD0xCsmVBDUo5QE6gYrDc8WAYG+W6L/plSL1dT/tjru/NfAm7368A55ePOzo/R A2x6LX8CsLtQadUTv3vir4cff+zIYeQ5z9+OpRdV+KlSs7W0Y2QZfKSrLgJTyJxtLoBjYphvuwHc Wn5dteUNw2v0h8YPzYfKPqw9aviUNolUJXWr8W5qHbWBMgh+LLJiMiCK/oAiaKuUhdt32pI0RkkW ViNgSyQdGZc/A2eqPSlb6IQM1uiMRCgTNcRkhwmYvPWVhF0KOgJaPFQXEOtKne8Y2hVd78easAPh x/wHP02BK3WPxa01yO6owgQ4mwTQqNcO/OXFsvBpCXmoCoQmZci7g5zoZxWp03zp52+48bb/e32+ /5WP79bicO0lLvXH33143cGD6x46SM1ed+mMG3qv684PbMsbNCIWxBUZDIiuua/3wOr7DvQibyQc u+fh2IWJJLgA1Rr6dotjZAJNvkbHyI3Es76NUepiYqZ3LvEL7zz5euI27y+rf0Xc472zel3sscqH qp+Lbaj8TTX7dBg8mlgvrU9Qmv1gL3UDabrZ4tpXUMuaGr4YqeGicUB4y6o8GQ4BeXuV7KfNyENU JhNrFGMEiGZR6qCBgz5Cn6Ap2ltbLqMCH52hzSFdb+hI6ESICok1RSdyqWcI8+Wh6oWDiuiN2aYf cwv9hJY9fWC9WoJ/EgLhKF+JilDF+YoeOLKJYSOree3PSmnXnKKRYe6hDXuwJxn7k/MLsUtv5Scv 5/sB9eqRlQcffvggepBvrUMjeGpPcUTB99sA6H5pIH/+fb2999134IBWa1R3CbUYanuXyt9mB5Xm NnoedzO3gnvQ8LjT6NfcOKF9BevN59pBboTGjqqaC0YZSoPcqLbFJ+IcSKXCYufxb/vqjTbgJHg7 Q0eiGaLCQGcZuBhCWwyZZD7aYTxhJI3eKoKXIo7wpLDmwDsRNoTFyv57PSWJ/SihRctnwRVDsWFd rJgE2JH/LPX0JxdBOHJsYeS6nbxd4PxFBFWQutMSX8/mgCXJp58af/5S0UnbneGUOOLRXeAGDN2v RVb8/kdRS80++MCUK7xO0egMe6etz6fw4HCsm3y5gG96Bw5TeShl48BX6nI+6x9DchcQ04lrxm2Q Noz4P41vO98a+1fnn4Q/Nf957H85+1Kfj/3BeTL13VjO4jQI+mbz2JDTJbiafWNXKmtTOx2Wqc5L Gq9pnJe5pfH2zIrGFZln+C08fW+mO0ReZKpIhGO16uimlNfjsBtd1pFEqq4mrKtucNitFE1QrJgZ PVpm5Ra6B6S3UlI1qO4BD6r+WIMsExnjlJFyWxBRUKmgt7V2cjiTcMkqWiUFuB6q09sTICGObzFS hhgtWy4riBwmhhSKhoMKVC18kJGKxhhzjtlC9dEhPmoh75nTamQ1jhjLSf6oM+pudoWIjG9kCIyQ YMONhYdC1hMi3J7m0aMCTRDLeDNNjaGGEMGPYTGURsBKa0Cxjn7J6G/N8Cna//LAZ4QbSu84KLbN /AgovV2K0OQf8uvigmsamaQRrrFmaHZkeNg0ohXXw7jgEWzGoSV2HA8X1XG8xZH1o9eBdwZdtA0B Cx41JUssXN1/rOwHKh9YwrLlDSUs26HSgmWxSKGKF3WbZs2iuF7jxcvumZhprblz07jLZ/7xzTeX mFw2zLMV3eF17U93XnRx/s3lFxxcs5GqCMCZujroFcSmssaRFemmuN/h9IRvO2f+s1covN0bfAFO X1d1qCZ7y7iJyaSUurppwRJkdd4P0VZGdz9RSbylRk75gM3n9ZFP09306/S7dB+t/6X9Tvta+2/s ey1/shjcJlTXcyOhA9epLpNOZzQpgOHNLhb9eDuvF62JHvCUygYzkYgxAwBhsMqihV+u6wHPqXxl pcksxeS9hJ/xS/6F/l1+PUQAn3ZVIUMP1XbHYZaTxbR5lMGkBUrPqAWixVe8Ptpi8ZpDBO2zhggt voLD1TlQlHCWHx6iiqVPj7cILgj3cW5cvvHGRVP2juBtjMcm/WPRmo2YEvooGgxqNhLu/nfOnV0v 2VCdZ/nCu24kk6gT15FA9/FSeB+nU7OJMqiJrbSuWyDjAvCaHGasga1Jk9VqNikOLZBq8U0sBFLL ZHRchYpmtkqRiCwpZUBw8JKcIcpotycTCgYdJnOGcRh4mbJIEkG4BWSDmBMMK5l6jcCIHOzx4Q72 piatsFpTfzEzVNO8/9xyWFS3Kg1UpGyl09zonBPVs3Pq2BDBGXjtzmti6CyI4SuEC4qfAMEQN/BJ IWKICSBlJbcfj82IocMi/+PODftuVX+meYKunviH9XgYjmMz4tbHWqbdSAbxYNxz8byXtV3Nj4zG IIN+xw2OQRgsVZvWg/XcBicl0ZJFQonedskhQcstAxq5Uc4ryavYa/hrwpvgRc87OTUEwh4zg6mz 
sCUxdRbuAEydhTuoMMFG1WUjbIwtaaNsE3GBAoVmOW2JhfciBEqcpKj6wEZUxWwGLj+gmEmgOUWz Hs0rOnHQKcqSAEgcy0Mrkw8ThOTkeaeTd3KAoAvuTx+ToakMbTaEM3wPmKdanGQmyWbZTSzF7gDz CCcwqzaVAzVcO9fJHeB03CtgE5xRUSAX2KcQJh09mSvURR9kX2ebkslibuCP1R77H4qLnZ09inyF cvgMt1/98B5y8735Z3+OHWO4ltFKkIqCau1nJJpQzGQKZSsW1eufoFmIRW9Z48CA7gE41nFqoro5 LpS576SeF55x95Dbha1uE0Ey5BJhlbBJeFU4LOQFUye5mewlKZPO5PLoPK44mdDFXWXuRl2j6xzd Oa6puqn8NNc0cVr8SjBfd7XrKvdV4lXxW3U3uR4WHnT/hlyv+62r091N7tT1uDa7t4nb4m8Jb7r/ LBx0/03oc1dYBJ9QQVYIFe5l4rL4BmGnsFe/l/9I+Bx87v6OPCV852bPpGazRWo2q1Gz4XxhBklI Gi97o1q5MAKIiBRRI9QJtNcZORChFkY6IiQiapORyDrM0lYKLO2NamImpuFTiKvdZqaOm8EmTNim UM6xeR0mbCsFwjact4FAErO1FUn0rMVs7YHz1LoiW1saZGtLJWxtqYStLRXY2rvAEWiu3wDn2xHk ygRH1LCOmAwANVlHl2Vkb0ZyZmyGjFWWJJvNamj3AM8bIkBh+BixRlRr0qIar0iJarQMNoEgbEQv bBxsSsyos+IgvhM8i2naK1W3MIVUa0emSHQdia4jVYZNkT3gWdWml2a5gOsNXreGz+iRq6wmjTZd jSNT+LBCO4Rvg7fwFfAWPh9v4YuhrcoJ7pRedaWX6FfpScTqJvWvgE+IRIlMfZvLDa79xxBzO4eI 3fBfP6Z154q07oqTR9FJwpNtKoYQMBo+2cT0MfiHiP5FYvcZQYJcbtGiM/vO7Cxhdxe9Ft1xk2jS MUOgCFwnGymqjBrGbi0lbhf7qOVXb++5emMCietnqJm/tmtuz6p5yMN9FAHnOCD9/X2gRIavJPn+ L8hHSuX4Cqiz50E5biHvV9eG2BBHco3sVJb0IT9NSJkFruXa5fbwrJY3wBvMH7k/ym+H3657PfV6 i8NEeIiHFerHSNqYmC1pxGxM2JY0wjaZARlHBqpTZ4bLyBkp463N1GUimXCmfGymJZPOpDIZtUjK LquuLstO16d6QPVWqeWRLIMCTj5EzpZlwWrVEwJABO1HHPp2OD284+vg+a7wI2Ucvk5+pGy6I5As uCL0AXEcTXvpckPGcHQHMA6W8C+C6b7Bop0lPO0cImQjsjYmZaPUgWMepq/I0i5svYRnGEcbN/pl 1UXa9e+H0a43FGjX33ZxYbT9BPnH4PavW3xNzWcQt9Uw04Co3pWDVG8aPo0JIn51EPG9lcFnDZK1 NfjQzXpsjlQ91HFb4LZQ4wHHkQo87k9UM2fJskELl61HRc7PgzssLbibWbiwNreMCXJZgJqWEX42 C1DTMsLHwD3YtKDfVwGokemA1JxywKaOF33NDMLudQiswy1X2Lb0DOzuYnjkPd+t2uBOuAk2MmrO mi6MAD2oE/4nYnih+u5ZiOGGMNkJlsZ4B7T0v0ZCsTK/Pb8TL3H540GvwxkDS/PPR5zw/KdoxZsL fCAwF4nQp+hsBOzJrzIKtkJIa2T+Tc1fahOM0Lg9x4TPID/PccBqUmUVTFCq1uZv1z0MpaoOTsUx HsLDeZQKm+xOgzTbZlPdp5z/rVjMzvOd5ylXg6vZm5w3Kcudy5Xt7CvOHcpe5X3FrnjMjrFTwrgl 8HJl1+CQ6sBdBODqOLbOqQGjoM2WHEREPiXYEQTBdUowqCg+JVxRi8LK1TXY/HSrlrrq6to6paLO adZ4hnr9Oo1laAb/n71vj4/qqvZfZ59HhsxMCEkIIeQxJJMHYTITkjRQSilFpJQCpZQipYgEkpTQ kKQhUIpIaeVXkdsiRUSsyK1YkXIppVixIqWIiBG5iJUi0siPIiLlg4iYInIh+X3XOieTSUhrH/6u /SPZn7X22muvvfZr7bX3PjNnQskJsiv1Kuil9QrF9+qVEJ+RHF+Y52funJycUGZOjj8zIy8zI76w 0JeZkZCZmdEDq5t/QDcunrRCZMT10MiVZsZ14/NTnz4JtyQnY8ErPj/5b8kbcEv//nkxlDY+TdWm nUy7yJff4vH85k6s6TNrzZPmRdMyexfl7RRHb/+O+mcfjj0D79j6HCPiBOV8mZN/aNd0HPY/+Uz3 g56qWpOxHaWjXLFDXEO08FGrr9b6v1He0wQ7fN7aV1U1L+ydluztmShvHzysfUab8LDzbkJsQvD6 +S+Kfcqbm1oU/Hect2c3ceB3q5dtM4MBtj2ZYj/+ZyL9AiyuF70zLDqGH0Vrrpho9VrLZfK2XKFo MvjeExXSo6IMPSM6UexmRHyoe3x8bPeMxBhNxSmfNybB643xelSMluhVHi2mu4964fzsc3uitc8a t3SPvi26hp8L9k78bA3/G7OkeRGPAsc636s+Hf7Pjze3/QcJuEP7x6yU/eG84u0eLk1ieDXEx7bD p7W6sfY/jd/xd/LlF9z7az211l+/i+p7kxb+7Xz96PX/UIPkex3XSdVdv2xfIu+6fqv8W7SGu9Te OiZ+TpriNyieMf5EXur7I4xR0TCvm7Ri8ke5esc4b2wMGSv/LAHVOv/Pz/7lF/WM5nx4kmRsvfol Vmg+ChfD/7x1oBOeoT9y0M5yULepNQhNul8fqpfpO423zOnWzVGxUS+6pnZb0K0x+qy70ZMu4UHP QQ7eCzHnu2+Jre9xR1y3uEsJgYSGhIbEzyS+npSXlNd7cnJCckKf4hQ95bHUiakT05N9qyUc9R3t G49Q1LcoY0AmZf7QX55lZG3PXp7zTE5z7hfysvMW9f9LPuUfCm4JrSn43IAfFj5QlFocf9PfB24Y 9NbNjyBs6ApdoSt0ha7QFbpCV+gKXaErdIWu0BW6QlfoCl2hK3SFrtAVukJX6ApdoWPgj8FosNpN /H8n+W+2YKY1SpQU04pi6JhD6zSMzji0ESFjUpKW7dAWpWhDHTqKqsMyLiqgjQ7dDTJTHNqr1mrz +ffK5O8mI8ahNXIbn3JoRVHGQofWyWc87tBGhIxJHuPbDm1RjPGiQ0fRwLCMi5KMUofuBpkfO7RX G2P8Apo1Q0ddHqun0CboWCtLaEv4NwkdJfzbhXYJfY/Q3ZwxtGl7DG3aHkObtsfQpo0IGXsMbdoe Q5u2x9Cm7TG0aXsMbdoeQ6ajI9rvlraVCu2J4McI/bDQsdw263Gh40HHWV8ROiFCvqfosenECH5v KfttofuIjK0zNUImPYL2i/xLQucJ/ZrQ+UL/kmlXRPtdEXV5Ivie1r68QD4qxIgMAPbRRJpF5YjH Ug1GrIbq6VGqFc6nkKoDzbgU/EqRCCLndqpC8NEE8B5E+XqaK6lyxOWQng9cBknWMA/pSuH6aBzi 
RxBXinwpoF50l4E/B3EdPQReDVV8pHZx/Q+ivirR1LHc4PdsTS5kK2kmeDVoF9deT/1okkjNdbT7 qAQ1DMKYtddi6xhP90LHRBqFvEek9jKUuAt59QhVInm/lPNJPx9FPE/GikdgljMeFVJTvYwMp2ul 3BzkspZy0TlDytY7Y/Npuo/GYDbssnURObXSmzLUMlM0VkofHpG6ZgJ3Xq+dZtmZaPU8mZcyka0B LpP8WhnfR6WV1ZJbK6Nha5jp6CoXzLbSsd+cXyVULkr1Q8xzPyNcU2etqr5B8wcfozbtZaLpQfDq xFLrpd0zwxbUed/t2m9s1y0RI8A9sftSL/W12ibrt/taJpbBPa8Re++8p/Y4l7YbU9tWaxxs98qm 5yFVK9gnrZ0vvSkP62HJKki87wy94CssGFDomzir3De2prqm/tHact+naupqa+pK6ytrqoO+26uq fBMqH5xVP9c3oXxued388rLgp2rm1VWW1/nGlT/iq5zrK/XV15WWlc8prXvIV1Px3romlD84r6q0 rjVvcKSa3LGVM+tq5tZU1PebVF43F+K+kuCgAY4IJMbfO3biqJpHSuvKfHeV19dXldfdXzPPN6f0 Ud+8ueW++lloR0VNdb2vdK6vtrxuTmV9fXmZb8ajyCn3ffq+Mbcjt04StXU1ZfNm1vsqq32PzKqc OSuiLOLK6plV88pQtL7GV1Y5t7YKFZRWl6FUJQRmQqq8uj7oa627prrqUV9uZT9f+ZwZXKhNVXWr cKctEvGyyuoHfXXlc+vrKmfyAEXUjuJhXbdIA3IrUUt9+RwezbpK1FpW80h1VU1pZKVoc6ndUowq uluDqoDn1dfOq/eVlc+vnFnOMrPKq2o7dAiOrEYWUilMphomW8PLSPPCTGYj/Y640tb8e2E4tumL q9Of1V/WX9NfB/xI36lvidDF0pXh9Nuiu7xdXeXttIk+I80YYNxl3GHcCnwzpEth2rxobHc+S9um fRsnIF7Kt0O+DkugWnQ45zFqzoQvpvDZKPJPJz55+ElraeE9ERz+scIlclaaBXxUfh/ut8g7pp4i TT2tvkG6elY9C/qb6pug16l1oL+l1oP+T/4n3Oqv6grof+gmabqlR5Guu3QX6G46Ti16tO4B7dV7 kNLj9GRw+uh9wEnRU0Cn6iWgB+ojkXuHfhc4Y/TPg16kfwH8xfpjoJfoTaDf1a+Bvm6gD4Zm8G8W 6HwqMqL5jGJ4cdrQjUSjF+gkA7UYfYwU0KlGJmi/kQ06xwiBLjAGgC40ikHfZJSAHmjcCnqoMQz0 7cadoEcbd4EeY4wDfbdxN+jxxmdQ42SjAvSDRhXoOcbnkbvIeAz0EpwgdWODmUOamWv2J90M4Jyn WcOtUaRbd1qjQd9l3Qt6ojUR9H3WZND3W7NAV1qzSVkPWQ+BU2VVgZ5jzQFdbc0H/Yj1CGQWWAvA edRaAvpx6wnwv4jzlmattL4O/lrXAZx6ful6h3TXObeXNHeMO5F0dy832uPOdeeB7u8eALrQXUTK Xey+A/QoN9rmvtM9BvRY992gx7vHg77HjVOpe4L7XtAT3feDnuK5C6enMZ6xpDzjPC+B3ubZBvpl z8uke7Z7doDzQ8+roH/kxVx7vV6cDr09vD1Ax3lxCvQmejEj3j7eQnCKvEWgi718FjccK2WIpiXa eNJL60pnUMKs8hl1NKqqtL4algxtI2+f4KOU+yaM4L2GxHbZlqMcmjW5HFrhRNxNNHNaw5kY5cdM HOWjxAl3j4UW4VM7HP1QeV01DRU8TvA0duZULXih4CcFr5rz0JyH6FXBewQ3yL3BQltcqDea3B8g bfdbOfehzukeWJlenP+7o/09KI7iKYF64tzfi5KoNyVTH+4JZA30t2McoulYzbW0mJ6kFbSG1uP0 v5V20G7aT4ewxk/gJnGBLlOz5tHStIA2VBupjdMmaVXaU9pqbZ32vLZFe0Xbpe3TDmpHtEbttHZe a9KuKUN5VIJKsedM24e6NKz/w8TfjDZidsQc7W50L0T94HZfjT4jjq+2456reYZIS5ztxEud+Dkn 3uXEx534il1Lr0S7ll5bpRYt6QmkoxBvtflJh+y49zA77pNrtyb1eOqFNJWWKCkrbWna2rQtaXvS jqSdtfPTd6QfSD+RfsnO90331fuW+db5tvmcXvVdZccZi+04c7JIuvyF/pH+qf46/zL/ev8r/gPC 9WYtyVqdtSlrV9bhrNNZV7I92b7s4uxR2VOza+1WZx9ijLjJia/ZcY7hxB67tzkJTpzhxAEnHuzE I5x4nFNuqt26nDI7zh1qx3lj7Lj/E7Zc/konXgsL4fgw4qcgM/x/iv79AW1JVH9Tf4P5XlaXnZ1D 6fG8WxgW7tBu8ekebIzp1N3oC5/eA948QAnwtrmULH62Dzzs3ZRhTYCfzYGHnUT9rMnws/3hDXtS AL5sEhW7J8OjDRTPdbP4rMHim4aIV4LHjj7onsi3RC2RhpMWGgQYSpQfjXhEGChwpBPYS1QyDTAf UAvAjb6kDLAI8ARgGWCFw1sNeBbwHGAjUTGmNAA/EjgOOAk4BDgDOA+45MRXAM1oC+Y1NBpxLCAR 9HjEWJGhSdBTAbAA2NOL4wBJpBWnIfYD8gBVgDrAAsBiwFKim5pJG3hRgAaizECUHWjxF53BQ/mB x9ScQFJwQv6h0LpAQXCyQElwdqAktCEwLrgitCm0NX9QyJ8fAxgdXMYQWBDcHqgKbglsCO4LbAi9 EjgdPMuQPzWUlD8DMDS4LLTTls2fFFwWjA158ptDCwJ50M3gd+AEyjGkBcsA04LRIU/QgFwFyu9B PWmQSWttT3A+2rMstCm4Iv849J5H/pBgrcBE8PcjPRw0wxSkD7Zr56to5xsR6d0Ci5GuQ3pTcHfo GNJbg0cFXgkezT+D+Bjadsxp40rAxeAFB5oELoMG5KtgE0PgYshiyI9BOiYinYA0g++fQEwozgaU PQZYF4pjyA+APsE6nHnA+OaPACRgzGMAzrwgPw/jnxYe/ydDo/KfAjwfmpi/GelDoQqBI8GmEOs7 HqoKnAidDky3xy//ZCSEhrf2H/O3mOcP8VKZR9suzmFOahmkTQzjUQ4Qnl97XsvC8xg5nifa9AZG BWtDFyPmreM88tzb878a9V7GnC8SmB58NrAG6Y7yN5Z/AvZ8DeWfC20qUIGlmPflmPd1wd0Y992Y 2+2BNRHpdvZd4IpIH0A6BnLbASJfkNBO/jDykwM7YTsMe4KNDpwS2OnAfuTtl3ybfzB4tMCH9BvB o2ivxAXZiM9hnM45trfSGbv3g1Y5Zz0GrsE+Afmu4NV8V4ja7JdpQNh+Q8QAnocBNnWVIWy/ybCV 5Ag7zWabBBSCDojdtpt/2CVsAsA2ybbaMT8ZNPuUSbZ/EBtmaLXnQQ49FbZsQ3u/MtWx8+rgsoIA 0jNCeQWFSNcjPUjWQUH+jIKh+UNDQ0I7C0ZI2VkAxx8xXTAa6YWQHy/pEqRL8keHhudPAswKFRSM 
Lphky0vall8C+angrQqNKpiBdbUN62ov0muRnoX0DqQbkF6PdDXSu0ITC56UdTgO63Ac1uGU/M2h 6fa6K6iH/W4MrClYiLVWFTgWOp1/JlSVfwXx8VBdWz78r/CRbvNX+wqWFCSzDxS4hLra1u0ohhts Y7EDJzrA4vaQf8WG8JpPDC0OpohPXhHaE9raKheMDc5G/nLIrUS8JpgRKhHIBaSESiJ849V2tpWA NCDs25JDSYGVmDv2S7PseSqaVJTA64Gh6PngWQa0aRnGYXhrHCgp2iwwLrQVPuENrPf5AlOKtmEN rbB9RtGO1j0M/mJ+YAjS05GHMS3aFZxftCucPneDPPukPfAhrXtRxF5RdLKjj8AeuLNoL6Ch6FDR EcTHw+PeYY+QdcPgrKmi86E4gTOgz7Tl23Qna6tjunVttK0FO926FqZiHchaKLoU2ll0pTg6VMFQ 1BxsKjagv92eAL1nimPzrxTHto5LcWKoqjiloJ79aHEQkIF0blu64x4TPgs4dnTDHvneezL72uSS hOAFAYxNCXwNfONZhpJs22ezfxYf3bqnOFASCHlKCkMLSgYFV5QMDZ1rTWOMVmCMzjn2Gt67SkZA J8NoB8aHSkomAZz4hnY6/QDwvt9UMhVttKFJAD6oZBbsGP6lpJrPPPb6QH0b2Z5K6kNJJQsxL06+ My+T8oeWLAntLHmy5Cm0dxXa54w72uyRdTcR7d2P9icify3SU5A+GDr3iTlTK+qt/qJwutRT9QK0 oEgvoUT9cb2Jko3xxj20wphofIZWmgHzO7Ta3Gi+oHnMreY+Ldbcb+7XcswGS9Ny0WBTm2G5LK9W ZsVaidpsK8lK1h62UqwUrd5KswZq86zB1m3aV6z7rTLta1aFNUv7dvTD0Q9rz7t7udO077ofcDdo L3p+4/WoPvwkQt0nzw3iAMnOE4hUvnvry60BfEMy+obzc538NL6Z61/Wv4wLfYFVQJrnz54L9n2q E2n+BEv35niR9vb3Bkh5Q94C+1YWIT3bkfbxZ1n6TRgbQg3L0Y6/6JfIMEvMgeSygmhTtFViDaQY 9PIWivW84zlHcVJ/guevnkuUiD6lUJLUlyz1pUh9aU59mu7TN7XdbDKmAXAT8aEXGbPDQOlXO4Gz RP1wU+i3DrAGsBKwHLABsAmwFfCKw9sJ2APYDzhIlJON8he4ewDcKtKbEOOG4YvjX75wYtxQfLih +PLQBtycfBgD3xDQuEn5uK24QeU8BQgACgGDAEPlLkw5uAnl4AaUswqwFrAe8DxgM+r34/6bKEB5 KJOHsnnQkZeN+zHK5zVjdIbSeJpBVTSfltByWk3raRNtp120nw7TcTpNF+iKprRYLVnza4XaEG2k Nl6bolVpm7XzZPiX+bf4V/i3+1f7X/U/698NznZQu/37/Af8h3Mp10/KvwipJ/wHQO337/Qf9L8B 6pB/m/+Ifweoff6N/sMoqew8/zFQ2/yrkHsc1EbUsNu/AtQ6/2L/Hv8aUHv99f7N/oWgdvtno/4n pOx0/xp/Bagd/kn+tf5qUNv9Y1DvBFBb/cP9S/1TQE1CyWr/UFBjoLvWP5L03Oacqn5GTkXO9L5D wC/2JyIvBVSe3+Mf5/eTnlOYMzpnUM6InKF9k8GPy2zKXZV5FZQr81zuCr8ildmceTJ3eeYZUE2Z R3OfzLxARsbktpDbmLuPumeM/Ggh93DuqxSb4froIXdt7hKK6ru/85A7NXfEJ8hbdtNjddzt9cf0 x1qfels11nyKdhe6C7Hm+TlsvDyB7SlPWhPlGWsyado4jf+/r4dO0U1EKeMASZ0AVmEWVm4WVmoW Vm0WVm8WVnEWVm0WVm0WVm3WGw7vGOAE4DTgHFEmVlwKVm8KVmwKVmwKVm9KHqAAUOLEQwDDAaMA EwFTANNRFisyE6s1E6s1cxJgKmAWoBqwDbADsAuwF9CA+qAjG3LZkM9GuewRdFOGy1/nX4CVsNS/ PCMmKznLl5WdFcgqzBqUNTRrRNborPFZk7KmZs3ImpXxfEZCRnKGLyM7I5BR2PdyRiHwtYxBGUMz RgCuZGzO2JaxI2NXxt6MhozmTCMzOjM2M9G/MkNlB7OLswfzNxrEBkhPhA1oYgOW2ECU2EC02IBH bKC72EBPsYFE2MA4ShUb6GtNgg1kYvbjyO9OgA30ExvoLzYQFBsowOz3pgH/6/VpNA2+jq1lOHkd a2mdrSlts5ZSAagi6luMGbFs8AcBF8jr2+Xb62vIdGXG+A75jviO9431H/U3wu/Am1F0u9UUr/eE Fd+N1WFiXdxHlqyLKHe8O55csgq6eZOxCtyyCjwfu3zr7prk7K59eRe07uS92rPHc9D+rg0lyN5v A1FGJzIKMmkiaevJjMi1pXX5vo5G8JgA+8m4v1NNidgEbdlEkcrqRJeOvFY5u8bsG6QUjcIpwvcv eOYao55WX8MYf119g7rJp3se+aTJ6zro+hXFuH7teoPiXEddRynBdcz1O+rpesv1FvVyve16m5Jc p11/pN6us66z1Ec+Y0qRT47S0cqt9IpYFs8ApTTS2JTGlFMpZ1MupDSljku5mgpzBfakxqVMS01K TUv1p+alFqSWpCxLWZY6JGV26vCU2Qivpk5MjUsdlToFkp6UaQizHWiUEKmxTV8a62JNEXrGpVrI m50yBpwx7YN8foFdlSz1nHoNY/ET9TNKUz9XZyjTWmgtpE/xGZJGuNPd2fTpsF0FndnJ4c9bUBIn DrVR7SRT7YKWZJFOcXSzfWXIePD3kiidYSppfQbxaVM+R4Uc6tBoMA1rG7c+yRTfJwHhqT6rAGs5 pM9HGJY+Mn1M+oT0yenT0svSZ6fXSj38XL+b+p76HlryonoRnJfUS9C/XW0nXf1A/QDt/DHaZqJv DeSSXkVLO93YOZZpDXw2pAnUgyj5uY8FWtoeGps8BWE6oEIoO0TSnaU5VHXgV3Uiw6HuPfgfNrxf Gzu2773a0ll7pn/4tojX41VIsgo1WYVKVqElq9Alq7CbrEK3rEKPrEIvVuE71P0DW7GmRqpVsGUP Tr24+yRtIy0CqBN4L/57yUbqUilHJR6bNPKGUIbQSs9GuFFiZNIYhJFJtUnPdpprh/lJzwFPQGjP X5S0LEw/kbQxImeLcLa/j87IVi1LehV4heCPH96/13Z/7RpXt2vJyA59jOzdh+3Xxw7sL8L7x9fh e76BXSTa9UvXL2Gbh12HYZtvut6EbTa6TmIv+YPrDxQv+0SCe6x7LPVy3+2+m5Jkz+j9ofzveEAF oFo8cK7srVMI90V4LwrzNOLv/eKWRyMj5AqlZF5YDjd0u16pJU1qSRe//E2sQIUVx2uQZA0asgYt WYNRsga7yRqMljXolp3QKxq5DyR9MKUP8k1gwoqmFYAjUnc/4ZU57TwbwWvt38UI3iXpnUaNEbwT Tv+2RPAOSO80qnV4nfdPkftjzRzPGf+3NEvKkJTRpIySMrqUcYl0N9khn1ZPo7ZnUKcmtVmiI0ra 
t0KtdFqiSy3Ge46hwthMlpm35y9P+syfiE8BDAnzFGZ2ocx8pByPaaFz6iMZpdZZ7nyM/nVzrzD3 q1HfZqc9/YXXSEuQWtOOd4KqkHqiHe8Vmf3qdrxcmf0p7XgbZPZHhnn/bNz//9lF53P6Qa2lszHU aAcdlLNRMn83pmcj4BRR3Fka2+PiJzVwX1y/cf0GfT7lOoU+/8n1J/A+8C5N22ln27mxRzRRwin0 F0e0HhbjhJ1Ch2Mnx4pIdQgRklUOtJUL50dI3agrknO6feAV5fqt6/hH7WFsrsDY+NMI5xBO88+6 cqqHYhy7XPAUO3ZohPiLrWkuYUu2yYTDudjLrRrb9LXKiZ4IDYjTEDa1D9LDI64zH2K/UvDPDThh D3LWZYBveVpAK+Q7kuZvx83QUvhOrsW043pALkP6SiSXLtM1qkP6VDvuSTpPuFvQwXbcA/A1I2UN tXHZrxQg9VyY9/7+Ik5tUN9B3nfVRvjNF9QLkN6ituA+sU1tw2i8ql6lKIzGT8il9mFMuqlfqcPw J2+o35BXvanepO7qmDpGseq4Ok491El1Ejr/oNiH+Nw++JBMdyb1dGe5s+R+bI8z32yeEfy04G8I XiV4teCvyf/aNLRojFus05d84fk19vHXInnUqCXKbhvJu6a5ZFeO5F0l/p7Wnna88+KHt7TjnaZz SK1tx1ss36Fd2o63jfbKrhzJW4N7sobZiuTV0rNIjWrHe4qeRKq4HS9B5t4X5rWNzTNimzx3JP5W E3+rxN/q8LcnsJedhNeNYmlXY8R4flU4RwQfjhjhp51xZv7vnScZ/LQjI/xJR5Bab7oabpu2HGPe o6JRorFlHTmfQkh6Tcvsdumylh+0Sz/RYoXTHor6nwQqk+cVbc9BMGPQEclhucYPIMecSR1LXlt0 Q8loMnEujQ73rJ2n6p4NWEhj3fWf1BBxuvmAvljbpF2QZwB16DfFjCYtZnwYON0RbP6ICFjfIf18 mNZiNgO2SWzzRtPY6JJ/Yzj9b639I4d/3dn9A+7Qp8RjerB/YFfywIN55hN1ew/wjHHoaW3gmUxj XWM+evAM/zil/1n4iOf8j7SmorDjRy0NA6c7Qnt+xY0yLl+bLOhWaOWNtU59gsNZBz5h4X99TfF5 42rEWZL/u5Wrufb66cjwIfZxvtFqskp5z21oGdS6B6sMYyzwMH53We0wxgCXmt2By4x7gKcaEyR3 JvAcyX3AWC25TA80HhD6NNNmGvAEkXzAkefcvTr8uiowpzC29gp9UehpjPUjjI1BgvdJbi5puof5 usdYx9h8SjAJRq7WpK9lbEyHfLZezNimjVPCP8U0y2tNgveyZm2vlNorkrukxsmsGTINTDucBVK2 QfiNov+IaN4n/DLBXGoa87Vp3ELgQYL3CZ4udR2Rupi/12nDZaEXCM4V/JTw+V2pEaJ/hLRtBOeC 5rInRXKl0AEHrxM+61wpGp43nwNexFgtNb4I7BO80ORxuGJ+B3ibeR2jV2vCDtRyngv9iBVgzHMB eiXzmYNcnh2X9HqX4OXStuU2LW1bzvWq5WqTjIa0kGltpV4rrd0n9BGhtwnt4ZaLTIBp9bmWAYJz gOtabgae33Iv8KwWtooJLS8AX2j5JtsD26paff0404zpavMGxmLDDUI3NPOZdA1jFcd8bSvzVVzz DsFneTYdDreq7jpsWDsqOIZltDopFdPM7a8TToBpNVlqnyxlJ3Pt2l6nDT6mpdQ0qf2q1L5L9K9s HiqSrD8gMittScFN0vKrzVs4V/q16/oubpvQcVwWNK+y41J7nEgmMVbZonNas4wnY7oqnJXcQm0l 09DMz/zOyMhsFm0uqXGNyDTJvIx2xo3beVLmq8leWWJdTWJXMSI/zVjFfbetXUYgIPyDIj+ax4qe kl4nCT+meYLI7JDZvyycoaJzgayvRpZk66UGtmdq5LrQki3S/l3CXyf85/g5F/PpFbHwQyb/ZvsX TdwEVD+2Z/T9uPTdtrRVYsMryX6nUcONgvE2OasXCr1P6AbBcjNpma2gp0VuhS38SxRoudDLBNfb pVr+DmyxZLPc7bTnRUOZ4KsiM4Zxs31DHCL0UH5yqyYL513Br0vZOqFfFPw74SwSulGwfdf+nuDt gn8l+A2RXCn4pHDsp4IXhWN/hn1O8PcZqyqhX3No3FGMch0Wq4/V2Z+PY+9NL+k4ZeiflvHfb1ZD ZlAzv1Hpk1XQJFa9n7Exm+fRmM1WpBezd9VHiWVmm+zbB9t83Jk07VXG0DledBYKx88co4E5Vonw G9gXsSTtl1lLub5f6DEsz7R625oseBhjthk9UUr5+H1I4MmsgSX1tSJfZt3NWDx5gXizEVaM+NJC yX1E8EqR2Sz04zw71iLhLBB5O7dRcoV/nfe1suaHgJ+9zl56/nX2pXOu/0Jymb7TuFd2vWbZ9V6U 3ZB9zjMmdl31WMu3gIPGu6L5Vin7VdG/kHOt77IGi7XNF/wDadUw60ciM1M0c48mmBmyn/5a9B8X 3CA1viv4dc612CfPN7nlD1h3CR4OHG+9xRqsnrKLHRI/IB5GPMBz4tkKZNU/3twDeJjgg7IDxrMl 0G/FHvbyTGHPKhPMn6xcFD/zlGjbxX4eeyhjF2NqkLU5jb0EXRVfMY3HGTTvgPFsdc0bZO1Ysl5G 2eux7XmF1iS2PU3wXpGRpxkqW/AI4Q8TLO9Etsjzs+YVghcyRgsYnxa8SzSPkich1GJ/v2K3YIxz y7TmdxiLngOCfyL4AuEUhDJMvyQabhMsz+lVqZ7HIxk1WDD3tzmKbSbOVSGY11qc7JWbrqcwvraI R0bW3RE5HVUZ/M2Sq8xR63kdGZb46kZZX/N03KHVYv02ltTZwhfodwgOAj+hs8Uu1LGO1Jf0ySI5 TPBI9pP6cKFtDfyJ7mDjMPAU/TJj4xmZL7bh6fx+MuyzQuyNd//PGdeEf4Gx+bTQbOHl5lbhSK7F zz0eNLmuB43zbHv2Ccqol31T+iWe/KptD2Ib2cZiOQ9ckjMDr9wRPALw3smcyyc0YNl/xVPtUhdk D6qXHaRMrKhMdhA5dXDtWkDaMF/20GzG0BaUWpTgpeJzEsTDrBM6V2ixWOt5XoNydrpTPEAVLAhY ziTrr2+SmeJzVJX4jQL+7A3ngRDj6/w8raVlI+9xLfyZ4E9bbuV10fIo04ab/Ta3DThaOKsc646m jSqNIt9jniXvMc8Pv8ecLe8xD5F3j13yzq8bt+telEw9nLeYXfIOrYe6427Qh+Kom3D5930U7p+x OPH3phSK53/+wtsh8hRuAnqHN53TIt5x1qinEydS6syZc2ppm+CdgvcJPlRWVfkgHRXcKPiU4LMV ldWldEFwk+CrldWV9RoJtgR7KufWVGlxgpMEp6FoqeYXHBQ8qKpmZpU2XPAowePmlJdVahMFTxE8 vQ6iWoXgKsG8p+ryTrT5HpSSt6t59GNvwD0Ed7sBu2/Arg6Yvx/Bb193TrW9u05y9yJ5ZtgRe9vh 
JJxUBtNwGk0TaApVENtExze0D9IRnBZO03lqomuO3jec+Jj97XG6YN/3NCdWg9H/bqSriapCLXTe yX7efuvaqJAea8ZBJz5jx9Fn7TjmAOQRx5234/hYu3zCJDudsFL0uBJ2J7zRc1nP7YlbEk/2upgU 7bx7vcvWknTQSTc572JPcOIKJz5ix8nT+XkLaR777eV6/g4aGVGeKG9UTFR3+WzsH3zm0tI1n3yv /hLmJoWCNAxjNhXjtYA20BaM0kmZc48eR0r9Xe9FSo9yOHcIJ1k4mCP+3qAe7+TdKXmJEdKjhdM7 LJ0g0qZ8BysJ68cvNfxVtF6S8k1S5l3Py5C0pExSa2nmqb+3K408dYXbBx3JoiNRdPRmHU4b0EL1 N65ZXZZPP/nXNEh+R8Olx+u8xhVlU6yRbGQavY0MI91IM/oaSUYuf+/RCBh5Ro7R3+DP+qNQ9hKW QxPrVn+HHkP0WHoc+uHSE1BXN/mVDbc8ldKtJdZjqkXearB/UCFaj5b7uUfWlv2Z+nDHzuNkxQ2T T84LIng6BRDs70C2clu/qajUP+CnSX4DJDry+bq2Dh5sEHSPoUk0nWbDFhbTMmd29/A37/lLcVqC lqblhr93z59TGO5caP2GUP3CVF4rpf4b1FqhDoWpX4Wpw2Hq10Lxr6HEUIJ6g1PqJ6Tc49QfQK8R md+EpY+EqTfblTsq5fYCP61+Cvw1kflthEyi2sf61M8ws2sRHwtr+l2YOh6m3gpTjWHq92HqRJj6 v2HqpFBR8HZJhJ0Fa6WEhqhfoLZvob5fSK3fUj+XbxoeQGo90geEu141gLtevR3WdUootiP78/vn 1EZIblJbKFptVVupu9qmXqZY9X31CsWpHWon9iH7l/4S5LPEYTL78c53Ir+NjP9S/wWdr0BeV6+p 1+R7AEqtlmdt/E035exbpuybmc4vzaTKb8ykQcfrlC7Pzm6TZ2fDbKvV37ES9K+yhVqJViKRlWzh fMFvArE+7Unao6fpPt2v5+oBPagX6iX6E/pS/Ul9mb5cX6Gv1Ffra/Rn9fX6Bn2jvlnfom/Vt+nb 9R36Tn23vlffrx/QD+lv6Ef14/oJ/ZR+Rj+nn9cv6Bf1S8Y9xn1mvhkyB5hF5k3mQPNm81bzdvPT 5p3mPeZY8z7zfvOzZqlZblaac8wa82FzrjnPfMR81Py8+QXzMfNx84vm/zG/ZH7Z/A/zafMr5lfN r5vfNP/T/I75PfMl8/vmD80fm6+bPzF/av7MbDD/2/y1+ab5O/P35tvmH813zD+bfzXfNf9hXrc0 y7S6WV6rh9XTSrf6WplWlpVj9bP6W/lWyBpg3WQNtG6xbrVus6ZY06wZ1ix3kjvZneKe6p7uLnPP cle5a9317gXuRe4l7qXuJ93L3Svcq9xr3M+617s3uDe6N7u3ure7d7h3une797r3ufd7jniOeRo9 Jz2nPGc8Zz3nPBc8lzz/r7r7AIsaaRgAPLuzE4RkEaUjHQWkZgFpggWUJiICIiqK9F5EqqLAIqgo dhTFAgJ2QNqJYj31RECx61kRQZSiKJxY8c8OynH3+bXnf77/vl8eh0kmO5tMZt6dGTbJO+o99ZH6 TPVz2VyCK8KV5ypzR3M1uTpcmjkPZ6ECVGBavhJk+k5QDaoxSmpADebsaUNtxiVdqAsQ5EEeo9M4 OA4IwVSYyhiVBtMYo9JhOhCBK+FKQOLrrSiYBbMAF66H64Eo3Myc/eEwG2YDMbgdbgcj4C64C4yE +TAfiMMiWAQk4EF4EEjCw/AwkILFsBhIw1JYCmRgGSwDsrAKVgE5eBweB6PgKXgKyMNz8BxQgBfh RaAIL8PLQAlegVeAMrwOrwMVeBveBqrwV/grUIOP4CNG9qfwKRgDW2ErUIcv4UugATtgB9CEXbAL jIWv4WugBd/AN0Cb48xxBjocN44b0EU6SAfoIeYH6CMa0YBGBswYloeMkBEwQMbIGBgiU2QKjJAF sgDj0CQ0CRijKWgKMEF2yA6YIkfkCMyQMzMGN0duyA2MRx7IA1ggT+QJLNFCtBBMQH5ML3oiCkJB YBIKQ2FgMopgxnRWKApFAWsUjaLBFBSDYsBUFIfigA1KYEZttmgJWgLsUBIzxrRHy9Fy4IBSUAqY hviIDxzRCrQCTEcZKAM4oVVoFZiBMlEmcEZrmRHQTLQOrQMuaBPaBFzRVrQVuKEdaAeYhXaj3cAd 7UV7wWy0D+0DHqiEGWXMQeWoHMxFP6GfwDx0Ap0Anug0Og3mo7PoLFiAfkY/Ay90AV0AC5l2UAu8 UQNqAD7oGroGfNEtdAv4oXvoHvBHD5kRfQBqQk0gELWgFhCEXqAXIBh1ok4QgrpRNwhFvagXhKH3 6D0IR5+Z0U2E4GY8IJLgEBwQRQwjhoFFBEVQIJoQI8TAYkKCkACCKwoVQSyhTCiDOEKVUAXxxGhi NEgg1Al1kEhoEppgCaFFaIGlhA6hA5IIPUIPLMNXBS4njAgjkEwYE8YghTAnzEEqYUFYAD4xgZgA 0og5xBywgphPzAfphDfhDTKIQCIQrCSlSWmwipQlZcFqUoFUAJnkPHIeWEN6kV5gLelL+oIsMpAM BOvIUDIUrCcjyUiwgVxMLgYbyXgyHmwil5JLwWYymUwGW8g0Mg1kk+lkOthKriZXg21kFpkFcsiN 5Eawncwms8EOcju5HeSSu8hdYCeZT+aDXWQRWQR2kwfJg2APWUwWgzyyjCwD+WQVWQX2ksfJ46CA PEWeAoXkOfIcKCLPk+fBPvIieRHsp25SN8EB6i51FxykHlAPwCHqCfUEHKaeUk/BEaqVagXFVBvV Bkqol9RLUEp1UV3gKPWGegPKqN+o30A51Uf1gQrqA/UBVFKfqE+givpCfQE/cVlcFjjGRVwEqrnC XGFwnDuKOwqc4CpxlUANV42rBk5yNbga4BRXm6sNTnP1ufrgDGNQP0iCqlAdakEaGsFeuBZugjlw J8yDhfAArITV8CQ8Cy/AWtgAr8Fb8B58CJtgC3zBeN8JezmunNloPJqIrJEtmoZc0Qw0G81DXsgX BaJQtBFlo+1oF8pHB1EZqkLH0SkmD3V0CdWjRnQT3UUP0BP0DLWhDvQa9aA+9Al9hS8IEqoS4oQs YUB4EgsJP1KRXED6kAFkCBlBRpNx5BJyObmKXEtuILeQOeROMo8sJA+QR8ijZCVZTZ4kz5K11B3q PvWYaqE6qW6qlwu4HO4wLsVV5Kpy1blaXD2uAXP0SdhegO1lYXvZWF2I1eVgdRHWlcCuCmFRh2FR hbGoIlhUEotKYTm5WE5RLOdwLKcYlnMElnMkllMcyymB5ZTEckphOaWxnDJYTlkspxyWcxSWUx5r qYC1VMRaKmEJlbGEKlhCVSyhGpZwNJZwDJZQHUuogSXUxBKOxRJqYQm1sYQ62ChdbJQeNkofG0Vj o3hYJwOskyHWyQjrNA7rZIxdMsEumWKXzLBL5til8dglC+ySJXZpAnZpInZpEnZpMnbJCrtkjV2a 
gl2ail2ywS7ZYpfssEj2WCQHLNI03MtxxLZMx3o4YT1mYD2csRUzsRUu2ApXbIUbtmIWtsIdWzEb W+GBrZiDrZiLfZiHffDEPszHPizAPnhhHxZiH7yxDz7YB1/sgx/2wR/7EIB9CMQ+BGEfgrEJIdiE UGxCGDYhHGsQgQWIxAJEYQEW4ZYejVv6YtzSY3BLj8UtPQ639Hjc0hNwS0/ELX0JbulLmT6fKFgB VeAYOBbqQ0PYA9fAjXAbzIV7YAHcDyvgMVgDz8Dz8BKsh43wJrwLH8An8BlsE9Q9jgvs4bhw3OEa ZI4mICtkgxyQC3JC7mguWoB8UAAKQRvQFpSDdqI85tPsADqKKlE1Osm85iYcg35BdegquoHuoPvo MWpGz1E7eoXeonfoI+qHbcicEIEqxEhChjBAVkxsHuFF+KIb5ChyPulN+pPBZDi5iIwlE8ll5Epy Dbme3ExuI3PJPWQBuZ88TJaSFeQxsoY8Q16iblO/Uo+oZ1QH9Zrqob5yIVeIS3IVuCrcMdyxXF2u 4ErCFf/PWr6gt6SA278ibv9KuP0r4/6QClZAFSughhUYjRUYgxVQxwpoYAU0sQJjsQJaWAFtrIAO VkAXK6CHFdDHCtBYAR5WwAArYIh7KkbYgnHYAmNsgQm2wBRbYIZ7KuZYhPFYBAssgiUWYQIWYSIW YRIWYTIWwQqLYI1FmIJFmIpFsMEi2GIR7LAI9lgEByzCNCyCI+6pTMcuOGEXZmAXnLELM7ELLri3 4Yp7G27YiFnYCHdsxGzcw/DAUszBUszFUszDUnhiKeZjKRZgKbywFAuxFN5YCh8shS+Wwg9L4Y+l CMBSBGIpgrAUwViKECxFKJYiDEsRjqWIwFJEYimisBSLsBTRWIrFWIoYLEUsliIOSxGPpUjAUiRi KZZgKZZiKZKwFMuwFMuxFMlYihQsRSqWgo+lSGPGp+p4dk8w8zISJjOtIItxoIkZ4Q3GCcFMBosZ bbBY6sxoUxFEgZ9BA7gDmkA76AX9rGGskSx5ZrQq+Bak4DuQesAIX7lnAxzhb0xL48M+JlwBPzDh KviJCdcRqUyoSAQDNtIlQplQnwhnQh6XC9jUc+5wJnzxd3J8h3N8j3P8iHP8jHPk4xxDcI5hOMcI nKMozlEM58iM6YlIwdY4FjUYWzQYix6MLR6MxQzGYgdjcd9jlONgbDqOMSUpKDUAGPFeM3vwFvUA DiNfHyAY/T6BYYxaZ/H3KAXXVIuA0bjsxZjy5gyWPOdbuQtSREkDxilm/cBvfDbYgjMDBFddCnKQ wTM+hsyrepjR/KOBV1FHB7Ye+A3b8auKmVcN3FVTC9CCeZFvs1nfn0IgP+Q7rPg7NPAZDgtx2ILD Q+C/5c4Agr0eDtyBJ/ABIUytDAFxTDwJ8JlYJtjAxAWzVju/Hd9woAMMgCmuRZOAIxN3AR5MbCEI YOJh345aEh9jDQ6bcJkZw248NwvxumQc1uOwF6e3f2sfXTiswGHzf1UZSeDSEcysr2D+ZzLxDUzJ LAe7QSE49C12lFkr+G7DyW+lJYFrh2Bm3pn5787EBaXs8C2ngVgSs/b7lTlS/8tyS8Xhk//aMhT6 Vh7GwA44ATcwcK3RQMpAq5H/ptRAeUjjY9iDw8dDyuHzkOP877kj0cA13PgusOw2wGYfwX8xyWP2 TxlwqbPUOepn6jx1gbpI/UJdomqpy1QdVU81/M18NoupFwiIMeWhw5TUpIH5fHY9nuFsGJwNbhF8 ExvHWgdjz7/HiCWCrf/hjOnA/QLw3nJHMuoVMiPUQ4NaReE0wVyzJrDjijN9syamJkKmBkKmr9vE jHrbmVgX0wdugs3f0o3/nXSm3g6mD9b6dYPvagA8uRJMn+/H75rKyD40/4Etf/T+/8KW3/aE2fKH +yQ/WEqSTI/2MbPFHvgZ5zpQNwf+TvftbtNyMbjV439yYTRfLogQ1sqwy+jjsoTYeXy5OcyqWWwW i0fSwgTSFoVsOQRob0JEm2BxWHwTNouT50rPpHWGrJHfq5gizzQbwc8MBg3BozTC8CMw/MEEwQ+t MiQzjsRa9Sv9fM/q8mpJh+lFxATP0dzMsXl8SX2az9lO82FqHmSz2GxxQ2YXdylmaZRU99wzxfu/ i+YO7i0LMfuViHcTzuIQ4uxZrjxxeoRgYZi4yGzvxUHBEYExkRE8MVpUsFJIXMjF3y88MsKPp0jL C9aIiEsOPnpg6NMKeGq0iiAdissNTffzV3YNDowQ3Lnf2XoyrSjN5RnQZrSRAc+I+T2XWTSkDQcX 6dS0/8i+cWlSkE6Kc6bPcHb5vjn8O5vTfJbq0DJjIQD5rOGAWS/C5rNY4EShfZjYJ9Wd/nulcsbX e/t8jBlbnEXI3ljkIZu5YDY32CfCOM/pi2pik8IlOe91Hz8XjBgjVXvWQ4eXuarEQHHVg+QJMbP7 0guNXesmvw6uDt4d7t4R0VKuPn3xdb9FlUp3vFesAiqvgzzS5tmvr3xyc9yd+l/pPa6fl4TuWKF9 TCUw5ta7hJ+8V+ZsSdJqDHwpe/rXE75dFk4TlrE7epJLG4cfS13e++lF30a7mvWWa2uFNsv3nIpt +eyrPHaPWc9kN1NFN79JlSsOmZT0gLXN3I/55cNVq/YfLLkjfZzuZqspi30MnTO8qXjP7oWpm6Ej N9dHtqJm68mNc/clZMTlhl1x7BxZZWrLhkzLKOCzuEyJCNPiTFkqjOFQtAgxjKndCAlBSCsIVopy pDgSj2c+HPVW0QFNHCl/7uT5Cn+djSiHVhIkq3FkaKkUifoRL+puVEp5sC6b6BlKSR133CGiRLsL NlDizKCn09Py7PNsM6YGxcREmevr+0aH6YV/P2t6vpHh+lGhwYK1+t8eVLFYnzmpTMVjqh1T47xo U11Dnq4BzaP1mI3oud/3kcXiONGOtMP3ZZqdMeHbW8THx//oLfyj/2HeMX9qZlBQU7QK+16dPa++ JM1HLmHnvItfOiUv7E8Qvy7tqklSwGqi6fDVD/1kV4xLtjve2LFkdf6VGQebarpsxfql769cLXbd UTKve8TX+1sb/RpTvxjuP5+wuSXpdvjKRXfkvZsbnPyOLZ74YamG0TuXibbWZ0VTo1zPbWXtnVZz RgvGL434dM0mU1qTV4SeSWVWv3EIlpxv+OFJ8hYL26kKJXVrLvWtUmzv30jtmSEk3KW+KaJiiyzr vVdqW8n91euT5y5Y4VV1Osmm1ba030N7Y/LKhzZKM7MbzvvkV13y6rgc7Llo48Esd2Udc6fNX3KJ 9cWZ74OWjz+RaLXZzP63aws6o7KsYi/yZ20YVTXLm8HpPIPT4SE4aZsZ2J0rv2zXgzXV/jNO8f8R AFRwjWNavMzv6W7B4f66rjHe4VF/polnYGiEaWJWfF+kUyv+L2jSoMcMLCpGWAdHCR5HMsV1qvJU Vydza9rGSNeQNjXWnTLVxpQ3hlYbOCL5Hx6Rq3+04PEl/5SyfT/RotC/xHluRmj6Hrt8xctnXy2w 
F0pvitqadPq4l48XoX5zjVmNjGqR3qbie9NWmcuV5qaV1C4wW/uzwVKphC4TU/M3Pp+C+Oygzo6a SZll/fk6xj4Lo8wW+oh++kXGNPLopjuTuCn7uAlrjEMzMtWlZb4e7Zh+omas+HNPu3VRNsbq/Ubj Dhs97bt/z7N9+zLeselKszuS7mRzAyhFUQPpCTq3U/KvdvTlbGQ/POf1xTTlvVnyfL86E8/xM+fN T/e6qij/yei0V6vmrCi3ndWdSWCWzTzNHE13x6dvTgjLbk2106Jr+wyVw9qCz1fcOu/USCua3Mpx uXVQbWp5iedk+xmVPkGbv1MmzJQIGqJWkYyh68TshfnWUq/X/VJ6Q/vEbo9rf1BLzej9ry42USJd kz7FfarQPnp+XMVw2m1ALcYsmjErb2qG9b+l1kCy4Czik8jUSmyWxxCzGLFouyFmWfxrZv0w55gf 0T3sR4wtbzi6tFX/dJt7VePiB4dcrkuZ/VJ55ePdfkX/qja7nlybVSggRD1jQf9Kj4e/+K4zTekd ZpOkvbhM9MzkQ/WXDm0pNOn2M3t68faHq0JXD7eM6Q4uuWXT3B9gYHF320UDlY+v5MbkLaCsdEYY mvPTvnQpVJy+urFw45hFxw9HH91d2tIIfFZHHax2cF77Sl/Vr+jA8+mh2TryiZ37C3j3NyzcUBjR v5nNnagzmvtLiHrqzDQHsZpY3ymHgof9AjuaG8kud/PO19H7RahecFO1ulQ1uqLtwk5SMy+l7tnT u2YBdJuNtEzbR/7HOeSSkwe4/fJf4Rrf8ukj2WozdCvSV4xvzrd90ShJ89FJhrHCAcZEvA3V5bBe vD/r5YVZEBHepL5681sdP5asFGTOBU+Wlv7DSuHBU8XTpbUH2vHo39uxS2QkgwRz7oIDgn29Y/yV J8fGBEVGB8ckYqVo2pSRyYBnZmjAKGXwbdFAsPhX9u3+GTXl0XM8ZWm/Mwo7FiorW22Pcw2bMOpO ZEP9m/bQ/m1SYk1PzGPS5I7p5xl0fn38s5WT2u1o8GDcbJHVdSXK9r3dQUemT8sqOpU4bVGurdD9 L2Oe7Ipd1Xho8ZTku6kPek69NS687Dn1YWmxZZNm0Da5/UXRi93fSG9p+TJuS3TenTgvxfipaemm UtcWz0MnAl2yisqD9e/Lkv2bYsY2x+m7PZKg57y/keXzpf6ylw3P+biGeMskujF6rJim6iUTJ8s8 A8sNV/JNiXRPJ3e+phYyODbt7gzfthu6Pm+mWrYdGQbe2eTvvj5vrbrriyWHHN7aNJpYmO6ujPcs kt6dVT9ivbvFuSPCXvDmd2oWMCUylx4uaHriLNZXDqIh82uIPT/sEAk+JRSGczhMDcygRxLC38YR kiwOwhkzHweD69iCXL5c5zndVM/MfpqzcPwBXuQ+i5P3dGnZwY0k2BxKUQS4glhm7GENJv8BN9Ej /IWT3DW2tY4R/6z1VMQ1e05LIe08gJs9bUtPzbPOm5wx8V/HbTA5mqnaApUwbG5DYLOjbegpQ2Az /XdgEzQY64Fc/7YbxmaBOWYTktVtSjsiJ5UZVIV0iOpHHLDv6/CK7XIcr3vXupjsr3+pyytQa0hy zklRmX/EUt/xxN4D7jufRdVUV75PrLKP7pvQPjm57iklHVxftFNZ9yPpfMH9iu4zhxsno9oOcPfC Ivem6sxps99mW+180/P61bMMJSOLavcd3a5q6VqFfPnNzVuEFN42O71fm1/3Qrxoo1PtqBvro7O1 FoXnyr2X73a9E9ig+tVT4cretac0yhN93afsnXnlw8sCD/dHueypU/S9eu+X3OIbRHwuzBZv6Qhu O7hX53Sttpio/7rtD37b+3GkurC/6ZY3S5Qcaq4/dX9xLWGrjOflcVJejzYr2K/TPV1sNEX+lZik HJj/aNw8las5l4RfpYuunREuKu5kmTTWbmf09Z6wunOdUQWzN81etiUrb5QdnNvXWBAoElNk3KWr L137PNpkZG9kmUUg/4NLeZahlL+iaOYjscd+vZFXbW7dlH6ZeIFTefOTzhOlzN1HRD6Ja0wqbvnw 9GCyTY3QQlv/hZOcjlp1OnVVxCXeEzESDpdP4Sk1i7o9as3/1GorVuyX89VZSi/pDFJZ0pw9WSP4 /Ob12Zez7uWqlHA9d3bvLckISqNCdGviQoHC1uK3UkvfSaWNPr6qMeSALU9/x8NniyzvguU+ttev rrpcLfNRNDrrXIFlKXtSyNfg3K3NYgfEKk2ch905b0nzCSHG79ff/ZYKMsJ+y/8VftMmtBHNiD3O EA+ADXh4UTAMZgbAf1n395/pvSc/rOzJA7tNWkmherJPTzU/u7h9pppz8dVHMk6jh7+6vv+6Y3EM rTyiQ+i2W7ak/ZZRVptKcjxp9fsg9MXSU52rhYb3iXJyulc3KNUbjl65621voLzO56VtqxTa25wK 8s+pudZlfZzaKHxtQem1o1acvR/2hW0OvKv50Mb1aMa1Vk0bPY0jGTNmuVAtUOdTyIYNdMTKnjn0 ro/L72yreKGybfn7G+I9w465hrtUTt2wxw442AaM0BgbcGBby00i1WHvhxX7R9hKCPP3rOialdDP 2qHgPCwdiNE2Xcceq9nUXNB121OqmDCZF9+Q+2R82uZ8b3aVArfsc19uOeuq6jS3rx/Q+Z+Vye96 H2ZKZP8/0vuHHcM/6C02VG9mDaBTcwbwTd1Ap2b9mN9830Lv/3j15IslFkvlO+QVFTsu9ugVEtfz /3+j/r/UlWXKWmxb5nlPOMX40cvK4vgHVxNnTmeV6cUsmhdOiR++enrp+mq9WyP3rg33qZ7NrndS Fnfe/mjJpObZNaUeO+SfKrAyjtQkvF1zrXM861Xz6fUiqDbLrrnbVfLRjMObWtqyQm6nnHu+5S2h nw5fbtQarRr16d3nloTtetw+oeaokzJOu9aFikRnV+eb7QzUvThTtN3Hc6JUzhrlic1CcgYfGngO cTxL7Wiytj3K8mu6iPiTn0W813XfrZbucFqTfHGc9oKCMx0nl5FWS2+5Rqu8outqEvw957GkRSRE b9yXyPnN4niAR4WuftuH9IyGme4vdkVtCTti5njrXeKZQzJLfMa+3ps71oiIl/O5bKkYrsTvJi/p 1DRaV7R+6FxW9azwQMy4aqeLi9RGqseRFi5rF821sZY4WVFxdHpg7R6rrymJKim7JemAF1YjF8jV 7lZVuWb9UvtlTa9dg86tewYpjupadqO95ra7v973ePuuOvPIU6kaMcSIV3EqZ3L55zTcfioLsVyd H+ddGZEvvu/MIdvukZFfMg3CyvufzKxdq3Y54NQuhZUj/diWuqVz1le3qLRWHa3zrUxwQ7cm6zkf 2XK0KOFwRd7WWLlfN60Uj1XVNzgwLCJv3toxZ/Jer6hTudOhOOPyjlf2TX0s/8jV5LLa4NrnEe37 t13ljf0qenGe573po/LvfdTfPVFvllToZfGCLzw+h2nCnP1sFotmmttf11/+8VTt7zO+eakXBN21 
b/VXGPKoodPJzA78vkTyROmhqZKCzuD3F3J4DEomZ8ymFzyhv0iW2QfPbLlpdvpovTntN+QlFM+d dsvTSvnRw50FD5dOxA/pDcSP6Y0CQSAxXz1l9N9trDGJUZGB0d5RQYnKf/pQ4fBZIPeJ3h1eR/yt aVm+V+mbyTKbyl/6aIVMrlakirTaOscc+HVt65y0sveOlmny7Lcj6m2vVNlmBMuHv0+9HTbX3NT6 eGlASPH5G1knjsWElZYEmG8J98sL2u0u4rSO/3jt/VhnD6XEqepZEgl6qWT7JQkLYxnVWJGbmbHP 3pScMH21TPKmsu0e81bbkzv8dB1OrshXAvsdVbmWTc+8Yi23fUibbLGrnT8zPrTcpK+Ghp2NxkGp G24rBvbsUgzrTo4PbjlUsCHw6+1L1ZnaL6ND1s0tLAlZ1HDXRayyqfzLsgitIN2AY3q1nnMuvP0Y Z8jabFUcrti8tH1ND5qw/UKHl0N7ZsHjh3UK+Xy2Js1nj/79HBE8PpsZZLJH4Fq57i/rBfx4hm5I nZxPywytkuTvfwZhMW8+mIJ4wwWzaTwebcqMU415JnP/pkbSibJ1D83FZv28MHau/POvrcuCzSv/ 5LWgroSz765Ml2+om5Audqa4bM+dJ/ke/f1Nh0YFLkhYTMxyCW29v/rtx0uln7+83JZRtNs93z25 c0nJbF4ydfn+om2RklnsTfZfzdvyZ9zsDSgvMJfTzn4blr1rk2Ql/aXbolSZb+XLcn5ndCHhXeLm 7eUS2WNNQi26nV7Hv7x6X2i/mJamfHZxmW9jgwNr2KkNLevq24qmHfpt28H2llzrX6bJZm523/dh ehk6Nqx7eMnO2imzkq5U9V5Tivv6U+7zS/N5k1GnQal3reP9JVyr5S4FZbMvGO2rTLr8KLytiRdo L7LY5UmAhEerYkDCKujStXq4/aWQ7KTOlMMiCSf2qfKeS1mfH01ZqvX9DzEUy/gNCmVuZHN0cmVh bQ0KZW5kb2JqDQozMTE5IDAgb2JqDQpbIDYwMCAwIDAgMCAwIDYwMCAwIDAgMCAwIDAgMCAwIDAg NjAwIDYwMCA2MDAgNjAwIDYwMCA2MDAgNjAwIDYwMCA2MDAgNjAwIDYwMCA2MDAgNjAwIDYwMCAw IDYwMCAwIDAgMCAwIDAgNjAwIDAgMCAwIDAgMCAwIDAgNjAwIDAgMCAwIDAgNjAwIDAgNjAwIDAg MCA2MDAgMCA2MDAgMCAwIDAgMCAwIDAgMCAwIDAgNjAwIDYwMCA2MDAgNjAwIDYwMCA2MDAgMCA2 MDAgNjAwIDAgNjAwIDYwMCA2MDAgNjAwIDYwMCA2MDAgMCA2MDAgNjAwIDYwMCA2MDAgNjAwIDYw MCAwIDAgNjAwXSANCmVuZG9iag0KMzEyMCAwIG9iag0KPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xl bmd0aCA0MzY4Mi9MZW5ndGgxIDg1NjYwPj4NCnN0cmVhbQ0KeJzsnAl0VEXa95+6nQ1IIGwhEEi6 aUKAJHQEFwiBdDYiRk1IWBJE6bCjQCOroiLOuMaFzLgzOqCDjqgDneA4wXHBXVQUWQTXxtEZ9wEV 3JDc71fV3SFB+fS8M+c95zuft/L/V91anqfWp6puAqJEpCsUJTnFVWNKr3rgiwxRJ/YT6XVBaXHJ 6PW7735f5PUfRKyrSyvKq3Y9ctk4kTcuFxkZVVo1vjDq90/NF5V0tUifO8qrPEMSH6x1iKgmpPom FJ9RfVJ1+YWkIa/zTdPm1S64uWfKjSKDSbcWTlu62LnuvKuLRM5EZlzHmQtmzct7+w/xIidk8d5l Vu2iBZIk7dD/GfISZ829cKbr8Xn/EpkwXaR67ewZtdPf6rp+GvrySD95NhHtt6jVvC/mvd/seYsv OP+0ytfQlSyS0n6uf1rtgNykuSIbo3iPmVd7wYKk+7tfS/7byO+cXztvhvfLvQ+JvEgd4rcs8C9a 3NxfXkH/yzp9wcIZC95Z+8dTRYZ7Rbp3Ed13Ma9OPmvLtzFTOuUdimsXJ/r503h/gfZfvsrV2Lzs +7HRd7Z7lrztTH794MesP3I7nb6teZmdE31nS0rkceiYhN/IqRJt3i1JFI/U0Cub0WtyRG2zHiU1 Lnp19FBE9gn5jlqZaXWJjrZiY9pZVrQVFbVPBtlb5AJ6WfekyLgzipyCc6yI7tw8TtfEmuUVZds2 pa+LelS3VLpFbZNZOje+rvCjKk6Wy73yR3lPvWf1kS8JV6q75FH1qqyX+8B8uUpul8tlr9zG2x61 Td1k/0sGyjjZIQ/ZOyRFvNJHUmW6FEg2JeYTU2y/aR8gz3IZQNqlUilJ9mv2J4x5ptyqmuV+OSJ/ tjeqdVJjfyoLZZQUys2gl6yUgJTIZfa7MlQm2Pvpp7lyq9yEfLF/oHSmNKhqS6mxcom9A+1eSTWa kujDo24hskJuZdghrcWlht1jKoOajJJp6iS5RHra3+MeUOtUPxlo70biWVJJS8ehs4+cI0WSLGMk VnVWnaQv6YPlPvWo/YZcI8soXSKlMltmmjoNtF+3X6fs03K3bFfNagDt/4Op+x30eKJaTu/slcfp yQGyXQZQIlUDd1/YDTQuybiVKk3Fq77Krf6p7la3K5faoNKkmDZdSs/cKg2Wst+mrlr+cnqsUl5W VeoE5bXvYnaJGZdCZOrcXnpGY5S90XoGnTUatG4UElLJpVEsl0VAzw7VoC4TyDXXQMupZEQ0kiih QS0MRjHe5TKWmfKaLJUnZJK9Uf6gulAPS10SgWZmyUA5y37DSlXK/sLqY/XRHELEqUusPjp36O14 4eM7axb68bV26RTGBrlY+uuRpiYF0kQvWrRpDWM6SeLtPfYeq4PVgdXwAulZKks2qJNMH0V6LtJL GgWtkM3czZYO9PPKNpjPfE5hdpwQ6U9m0ZmmPyN9GurPi1v6Mgz7k/B8v9mM024zIwerbTo+Ap3O avoX+iexvv5tf2Z/q75RD6g1spP3I0edbDErNd6MlV6lKUjUa3QO9RjAOh1DHTJZpd9Jb1IzGcPX ZCHW/gz5UAapk2n5deoF5nkOdS5STur+ANbgDPqkVHzKQWgC8FFvm5Z6adV9xhZYrOT2yO4gGaYG 2Ab0pUo7+2szCpUSbb9JjQaC5ZTTObNoaTxr7BN7p72PlUL/2W/T/hp6S5cvob0J6O2Mq5U4bGo2 2jOY+wspP4i1mkl5ry7PmH5PmUL7fTnR2Jdi8txqLELA/oZZ3wsJgySD+GJJp201Vro6TY1RY6x+ 6m+429RthMqsftbJtPM2Sxz1slm9LPVyNvavUn6jcsWm5S51ilTJYtZElpyukuVF6S5XyOPytFwn i+S32IY5cj62ZKSMVLfTt7nMsSqpsrdgwq8gLeTqjWst+ajc043Mh2TzUXnkOJu0p2W6VaSuUz7V Tz2uHpc/A1E71WywU10BVqndao06VSXKK3CafIqGHfKJLCHnfdaJ6ilsUYp8If9Q8Ue3KkodMO5J 
tUU1qPHMAkHauWoMIxp6osP+u/In418tM1vtdKHnFNq8nvqux83C/QX3oHzHepsUjp+tLkbXKjVN rQqXdIT9TIP/0qNWqz+bHU6HH6e976mr1IXyvLyk7lQvmXrqlH2Ew+1T89WIlrZG/Hdl7U/5aolK 1zB90LofjvaHo41/7LNR3mrjR/p2noR6JU5Ch5A66W/0TVKTzPtK5r5+30Zd9UN7TFtOYWbrZyQ7 1zzW4zysMlCfMtrMC1bTMkZzN/2ewgy4Qs3GYqeYUV8VHo2LmVN+5afUPLWLWfAke93VapF6Guvb zupLaIJciCX5Qa0Iu0p1XrjMbbiH5Fl5Vs1T89ghn2dHSWC/WyaT1AXMwCO8h9x4ma/ay+fyOaNw p3LKYVXRqrcjvaBnyqxw/+nWluPOZn0E1UuM1++I0vYUu6nq1TlgNa6eFVCvrgSZqivW/Bx1jmMZ Z5a77LvUDepBk3ouLgGXKd+qJPBhi6tX9W3ej7py1QNkRvbPX4o2e8dPIbJnRHaHX4pjdo42yDzq TB0i8n8iL9bhXdkFtC1sh4WtZAZpZOMiUvQJri92Wu93+dQZWeogY95PDVCbcQPCTq8iPRMjs/HY VfRL/eOstp9bhS04NeyfcjSuZYUeD8d7jreCj12xP+frFR1BdEts6Ims8mP9iMaf88PW4bh+2Fr8 rB/pT6yKqlWhuwK+mqzmt4zr8aBoU9iahsc/ZIm0Pynk2I22yWR5h/G5hHX4KntLN2lvpVi9VQ4r 9mR1MymnYjmG6B3NSlFXtvRQpPfDvU56yA4q3Dz29rCdaw1G362Gy0iri5VCHSaw/37JHB4vbnZ8 r/0u5yDhdlLJ2VVo93LuJFwjYQ2vdJTunBS4h3K+vY3y8wl3sl+jlfqMo093l0kO5xwxJ+X25q6y H8kXYZfGSDNnplRzXu7N6tGjqU/K+oZyiVyPrALcmbRdqRNloHq0pZ3tW9ZAoV65KrHlHBg5c2rN ERuwUA3ipBl6dBrlzen9WNtzrI2JnOojdiByul+G02t8KJZ3na5xixS94hPb2B9tF06ib+LpLX0S HoDj7i5LOKNxpeesk0W8n/EZIC5uux7uNC/Ic8b676CeL9svC/d2RqMLuV2cIifIMFwK9bqDe6Eu n0UddJ3m0HNdzX1wEacufSfMQe71vJcxMktJXcopayBll6AxWx7m3FfMSOmUY5/22h6qaG50l2EV vpavVTf5UnW2Oqjh6kZsWLMVY8WokWqkuQW6xW2dzO1wHdxX+6yGbSbHbcb1VWepjsgoYhfsjIQ0 1ZfbH8w9V9/d+nF7c6k0Su/Gd+OatQ5HfyNLS8hT9x+VpltrynzDnW8DpfJ0aZwuGW+k9g35nJjX U47SKlU1WaLv1bh4dtudVpbKY1btRmK8ceeqv4Vbn4GsgapQDVIdVbZyqGh5G/s4QL6i9rqVWbKV G1Ox/QL9N0ESpcKM9Tn043Scm/3YK6+xdl5mHU+mp0cx3y+SPNb9XFmAv4ESc8kdoNwDnPlSOYkX cmLvIMM5cQ/SVWANCbeQL+zvcG/a71DufG4godnV+unOewfcQ6a0voedTW8kqUPqG73ukdzZ3C30 /be99LK/ZuY5pdZubiVjFHcwr7khVJobod7FIs99Zr87UUbo7y5mv9OOHNxNHMrdsovr1afXgN75 KGH0XC5lWr+pQ2RF6BtKB1ZFBFrSXnKcbG7IA+x/sjZTpbpFfwFzlXpxcytTb3MDaVL3qL3MvHvU B+Aexxj5h3RTEzhh3aZjOLm+T/wE8n9AzD0YvwPcUhJxGeBtTjrGWrTYsJAdW6hv+j/CT51EprPq jt5q2yJiQbT1iaDVNwNzz32TeRFB+BtCm28JraHHbLmxl3PbQFuiYxH57nDs94fW9+bXwvMqckdO Cp9atIt8p9DoJTeZG3QNbR3b1tkP2Y32YDvLbmfHNm9t3sp+0eLstfYf7V52cvOh5q/o0TbO3mFv tzPsdI3mD5s/5N70OCuuVEN248Rutp+znzG42L641TepCfYme0nzc83PGLlntHX2BtthO5qfxTK2 cfbNtq+5oXmjwf3Ny0ztdS2pC7vqOiP5DHMv9/1sG3+mLb9I91HpOyJOfYMN+kbbnlay37df5G4f kGz2+ZtJOSzf6+1bPsCxp3IzD2KBbyQl0gcP4LSvz7Ff0H8vyk6sW59W9dlsf2Svs9dJbytZHZLb 5byW1aW/KXZip9HfwvRB4W7cB7i7sXMuXDyW/OOwy8ONUR9gT93qMZMr2Yph3DZiZ/Us1eEPmfML ZSKuiPX/G3mfnbyIU0lnersQrJTB9j7u6br3PViVE5mrZ3CvF/39jznXIMvlWuxAsbbC5EjA5rwh 67HBDpWiLDUUi5zAbnKmsrgnvy0HiO+oSrDOURryMZbbxY71ibxJykBcD9wAbLkbG56ERc8Ggzhd DaDc6ZyTvsK/HsswlluQm3zp7FRfmZiTdcxRI2l9IBu5+XVR4+WQKuReuEp9xS4UOTVHnnrs5wBW UC6D1lF60L590qgcrOe7OfFEnhywUFtQ9ojB9MkOY4OKya3X35mOGxiBBLXW2PfQzODmoR6Qzdiy Keo2eVz1xAo+hnuZejzODPjPbxHmO0Wbm0DkOfb+fbxT/XGeyH382Hv5j/zISfyY24b9Krb4ZuaN 0E9zsUld5GJ6qZ3qJPp3GM/JJvszLMFnWPK+pIr5Sv4eNnW/+UKu7/6ZnNJs/AFGYCmtf5T5uIxw Ge3Ve3wO8+QMc1frzuimqmU4NyfwIs4b75h5kiQ/MHcGcobxmtkznLce8g1zYhrrpKc6jXzvqM+Y ZdvM+WEQp4WBUqAtLqP76LFWRr4POeaZccdaNvkI9wMu8q5P7PrrXE9Ok6G9Vn9B7cMsi6adLtpY aM7Ol5lbasS2z+e0x86tJoacPClPqjq9dqnVky3ftJfr73tymtnF9K6lZ2GN2eknqee4sZ5nVn+e JLNPJavxoa/o+AWsxfH0yFnYqQFyqtlV3HIvazc08tvkC/Ue7hLco8alSQ7hd9U7aiFSXlF/UXcy 65fLhbj2rItO6nvyv6Jeo467w+4tFST/W2onc/5+y0H4wx/dciOPnjeRuLewbEHZ1nLjOtaP3Mo8 3Ls0jpUVeSI3zX6s6S72IVXBnOrHii7FyqxWB9SD6kDLTe7Y5zi61QWcm4fR8grO93H0+QtEDueu cPRpT+wlLW/Z9KkenZtxetanyoNYFDGnxB3mjjCcVXS3uky+MueBUP83qlvVTvzZrL2dJnSF/v6p v2UBdl16ejhuhbpOfaQO0tcj1AhuBPrhNqn0E+1wYHmVJEd/1mGLfBtn6/6wj3DKawdzF4M5gcLx zMwfsNmaOxruBB/m1JoAd+ZmeJi1mwh3hb/nXqvPht0NJ9Gz32Evu8HJhntyt/yWFifBKdID7m24 D7PwG9rfE04z7DTnWhdnua+xAJq5vdqHGKNUON1wf3HCGdLXPshM1TxQ3PAg6Wd/xTrRnCXpcLb0 
t7/EKmv2SAacw7n0S87IA+wvZIgMhIfKIPhEdtwDjFkWfLLhUyQbqzMMi76f8dCcKx54hJxg/5uT v+aRMgQehX36nNPySbBXToYL5BTsWKHhIhkGF8twuAT+VEZLLlwqI+BTJY87+RgZCZ8mo+Ay+GM5 XfLhMzhRf8y9WXM59ucj5lghPNZwpRSxS1cZHicl8Hgp5RY3wfBE1vC/OH2XwTXwP1kVp8NnYV// yV3mDPsDOVvOhM+RcniKVNjv6xMUXCuV8FTD06TK/gdninHwDMMzZTw2eRanufe4H2meI9XY9HO5 O+zjRKJ5rkyC58lZdhC7pdkvk+EFcrb9rpwvU+CFnGfe5YZby51osUyFl8g0eKnhZTLdflsukBnw hYaXy0z7LW5fs+CLDV8ic9h5V8i58KWGV8p58GUy136DM4vm38o8+HKZb++VKwxfKX74Km5he7Ax mq9hB9/DnrUEvhZ+Xa6TpfD1sgy+Ad4tq+QCuF4uhH/HKWmX/F4ugm80fJNcDN8sl9g75RbDt8oK +Da5lDV9u+HVshL+g/yGfeQOw3fKb+E/yuXwGng7VusK+C65Er4bfpW9/Bp4ndTB98i19ivY5Ovg P8v18H2G18sN9ja5X1bBDxh+UOo5j/xFfgdvMLxRbrRfYpe5CW4w3MhO/JJsgl/kznkL/Fe5FX4Y 3ip/k9vgJrkd3iyruSE/In+A/y53wI/KH+HHZI39vDwua+EnDG+Ru+zn2I3uhp8y/LSss5+VZ+Qe +FnDz8m98PPyZ/sZbKXmrZyGn5EXZb39NDZc88tyP0w77KfkFcOvyl/g7bIBfg1+kh1hI7zT8C4J 2Fu4e2yCXze8R/5qPyF75WH4DcNvyt/gt6TJfpzzpuZ3ZDP8rjxiP8buonmf/B1+Tx6zH+U2qvl9 eRz+QLbA/5Qn7b/Lv+Qp+EN5Gv4IfkQ+lmfgT+RZ+FN5zt4sn8nz8OfyAvxv2QrvlxftJjkgL8Ff GP5SXrb/ho3XfFC2wYfkVfth+drwN7Id/pa72MOcj1+z/8oJYwd8WHbBP8APyRHZDTfL67Ate+xN /0Ob3vE/sOndjU1PMjY96T+w6a7/FZs+2Nh0j7HpHmPTc36xTR9mbPowY9OHG5uea2x6rrHpI4xN zzuOTc83Nt1rbHqBsekFxqYXGpteZGx6kbHpxcamF/9q03+16f/f2/TGX216i02PNTa9/XFseoKx 6QnGpif8xzb9v3VO/+U2PcPY9Axj0wcYmz7Q2PSBxqYP+tWm/2rTf7Xpv9r0X2DTn/gf2fSt/8s2 PfS3DBq9w389/Uzor6bVixLFbBH9lYNQDNY1HVum/76pDDtTgTUZj82YyVpewNpbLmsdK8z3UCf5 Bv8oXy0r/zxWcDif/f7PuGk/rP3RX3Mf91ExcvSPwi1LzF/itsmg/9A7WsLf3Hg6d+narXtSj+Se vVJ690mlzmJaGnoG62/pQ/WHs2HY8MhTXDK69NQxUna6nFleMbaySsZPmFhdM0kwiv/d56f/YO1n n/9nRst7evXECePHVVWOrSg/87Qx+aNG5o3IHT7slJNOHDrkhBzP4OyszEEDB2T0T+/n7utypqX2 6Z3Sq2dyj6Tu3bp26ZzYqWNCfIf27eJiY6KjHJaSrBL3aJ8z0N8XiOrvPvXUbP3uriWitlWEL+Ak anTbPAGnz2Rzts3pJefMY3J6Qzm9LTlVojNP8rKznCVuZ2BbsdvZpCaNrSZ8fbG7xhn43ITPMOGo /uYlgReXixLOkuTZxc6A8jlLAqOXzq4r8RUjr6FD+yJ30Yz22VnS0L4DwQ6EAj3cCxpUj1HKBKwe JbkNlsQlUKtAL3dxSaCnu1hXIeBIL6mdHqgYW11SnOJy1WRnBVTRNPfUgLgLA50yTRYpMmoCMUWB WKPGOUc3R651NmRtqbuuKVGm+jLjp7un106uDjhqa7SOzpnoLQ70WP5B8tFXhHcpqr6qdWqKo64k eY5Tv9bVXeUMrB1b3TrVpbmmBhmUtdJH++pGo/o63YvJHiqiq6+bEmrUDHeJjvGd6wy0cxe6Z9ed 62NAetUFpPJCV2OvXt7NnBN6lTjrxlW7XYH8FHdNbXHvhm5SV3nhpp5eZ8+2KdlZDYmdQ73Z0LFT OBCf0DowoyXNhEx2HSqrbOlOpWvkHsM0CDinOalJtZuGDNM0Y5jUTRtGNp4aRanAdIZhTqBdka8u MVfH6/KB6PREt7PukDDs7s8/axtTG46JSU88JDqoJ0fLBCM9Eg5kZgYGDdLzIraIgaSOo8z7SdlZ S5usbe4FiU48uk8qqilWk+uhz10uParXNnllKi+BlWOrQ+9OmZrSKF5PZk3A8umULZGU7uN1yspI Sktxn5vp+5BZ590Dcf1bfjolJnUtmZ0bUEn/l+QZofSyKnfZ2EnVzpI6X7hvy8a1eQulD2tJC4cC XYuqHSlWOGSlOEwqM3FyS2b9Uh0fiErnJ8bM5OlNsXFMRROjnKMDib5TQ1zT3uX6hYWa7AO6lPGO FgtXM5Cb2fZ9RJv3NtWLr3NQ4aj+Vtm4SXV17dtW/czMQHx6oF06syKQkB7oaMJd0xuTOo7PdAY6 +tIxIJ1aWJNKHF+9K8VV46x2BsYNwrLkJR/wHMgLVLDcAx3Sma+ao42sToYTjNDu6YEe6ckqMe+H vOEjPcn7Duhs7dO1+k6G49IDiemBziaclN7Ys7OuQWeju0sLa5If1UBXIDHv5+vQyfz0SA/0TE+W xLy4HyRcF2MfAirU+RXVvpTaGr3y9E90+vjqQIzpXpc2o+H+6mhUJJqfkNhxrNtAeSY/rNKay0Ir 0xUq1upBgqO/ShwzIjvLTUhMyNnfzQ8xelI6fSzD9LphKW5XTZNt+7RVNR1g+dKdOrnOR9AdqBqk U/s7UzAHvv41FHOQdzRbSV3daLdzdJ2vrrbJXjnV7Ux01212JDmS6haU+CKLtMl+5NqUwOjrapiX s1UuBsiSwga3unpsg1ddXTWpenMim/TV46obLWUV+QprGvqRVr3ZyT5qYi0dqyP1i1O/SJmiDxqt OJM/ZbNXZKVJjTIR5n1akxITFxeJUzKtyQrFJYYU9TeKvBympjVFhVK8kdxRxMWF4laGcg8I544j JVGnPCLszmISQ48e3aJx1a2XuBmdmmyRholpBSOjlHhADnCYUD4oB1OAD7wKgmA/iBNnOO8qsCac Ei1pDls8IAc4JB+eAoItb6vAGrAWHADR4nU0b+rQcUhaQamjmaLNsgCsAVEUPfq238SsCr+tBQ7p xKFSVzqaqkZTkWiqFk0up+MH4hMdh8UP1vK2D0Qh/XsqoXFYyvF9BofBD/IE/nZwALS3tzi+3TS2 aogU5Dm+Q9B31PI7qQALwEoQAPsA/QB7HEdo8XcIPmJy+UA9eIL3Lfjbw7k7IEfnOEKOI7IRBFrl 0jkOgHao/65xxC1DNptAQmcTOLQpN2/I9oJujkO0rd5wJ9gD8kE5WAU2ghjUHGxsF2/KHWwcnjuk 
QDfpoOTJUHslfhU+75vGVtLvqUTkg3KgE7eDaOQepJIH0XRQdNEotB1Ew0H6/yCjQQwivmw8Oddo +bLxzHFDCs7UIS53Q42/M+w/HPbvDvtXhv0rwv78sD877E8I+1Vhf1TYHxn288L+kLB/QthPD/t9 w74z7KcZ/4vGqqH1BQMdX9BxPsfHjOTHNPdjplEF3DqmHqwFAbAFbAftpD4qSpS9BaZejm+siRzV 05B7wMhNcRwwcj9CykdI+cjI/ahNTD1YCwJgC9ju+KixXRdngddxObPnctF+BYii1GpKrabUakqt 1n8MACcCJ8gBXlABuHo59pCyBzuxz7GT+bNTX7/gROAEOcALotu8ORxPW1NkOut1nXV24/Q0D9Og kWnQyDRopO77HLuQtcvI2oWsXZTeReldlN5lZB19czgmNTqmpzU5nmos0t6Tm1zT0zoVnOAoQnwR M6mIBhWZdVlIJ22B9wGLGVVIaiFCCslRSJMLJdpR6sjkHpXmyLMmcAlMc4xwZBo/15Fl/OFhf5gj s/Ek9PR15CAlh7mZo22CI4O3DN4yzFs/3vrx1o9q5sD9KJmBPxS/n8Ot3xlEZ2PXnmYeOxtd6eHA 4CFDHnO4rPFc5HQW16aS0iG+gg6O3tSzN7XPcKTIHmCRmNJ4whBTLKVxdGk4gP0o6OzoYc01urpb h5iIaY5u+APxu4b9tMbUwrTNqsCqZhSEeRRPb8fTVfH0bzxdE884x9M98aiNZ0bEMyPimUfxzKN4 OjOeeRS/qWOXLt4ma2tjv6FrHrFekP3WC97xltOl1kTvj7bWRO2PstY49jusNdZ+y3oi5olYKy0m P2ZKjD9mVUx0Wmx+7JRYf+yq2Oh8K99RbpU7opypzr7ODGeWszQ6MTXRldg3MSMxK7E0ZkrBHOs8 BnGK9TZ3/bctP3f6NFlpvUWc03oDzoG9wBIfvMCEVsL1JrQWDpjQFpNbl1lp3hNbyumc28E+4DDx pqz1hjXXaHNae9Gyl9x7xWHtte4zsYnWHlL0OtCcA7ygAkRZe6zVJs991uvSBPYCh/W6dR4LK83a 3Xhip7SCI9Zua4J5fxn3Eu5F3FbcC3RoJ4MXTau2UvetYgP2NOJ9YAGoB1tANL3zIm1ba72styfY C3xA539RVoEnALssuT2E8o2sKbCSS62LZbnVgKZLrQvAhWA5uIgFdKm1GCwBS8EyE7MAnA8WgkUm Zi6YB+YDv4mZDeaAc8F5xPjRMcPo8KPDjw4/OvxGhx8dfnT40eE3Ovzo8KPDjw6/0eFHhx8dfnT4 jQ4/Ovzo8KPDb3Schg4FXwAuBMvBRSZ+MVgCloJlJmYBOB8sBItMzFwwD8wHfhMzG8wB5wItP9fI z0V+LvJzkZ9r5OciPxf5ucjPNfJzkZ+L/Fzk5xr5ucjPRX4u8nMtf0NUboGNglwU5KIg1yjwGAUe FHhQ4EGBxyjwoMCDAg8KPEaBBwUeFHhQ4DEKPCjwoMCDAo9pgAf5HuR7kO8x8oNGfhD5QeQHkR80 8oPIDyI/iPygkR9EfhD5QeQHjfwg8oPIDyI/aOQHkR9EfhD5QSP/UmsWE+kBsIHJdak1DUwHM8BM kz4F+EAtmGpizgKTwdngHBMzEVSDGjDJxFSBcWA8mGCGfpaci54ZRo8fPX70+NHjN3r86PGjx48e v9HjR48fPX70+I0eP3r86PGjx2/0+NHjR48fPX6jZwp6pljrZRK69GKZBqaDGWCmSZ8CfKAWTDUx Z4HJ4GxwjomZCKpBDZhkYqrAuAIOqmiaYjSVo6kcTacZTeVoKkdTOZrKjaZyNJWjqRxN5UZTOZrK 0VSOpnKjqRxN5WgqR1O50VSOpnJaVI6ecqMnHz256LAITQPTwQww06RNAT5QC6aamLPAZHA2OMfE TATVoAZMMjFVYBwYDyaYeTdLBhkdHnR40OFBh8fo8KDDgw4POjxGhwcdHnR40OExOjzo8KDDgw6P 0eFBhwcdHnR4jI4gOt40OoLoCKIjiI6g0RFERxAdQXQEjY4gOoLoCKIjaHQE0RFERxAdQaMjiI4g OoLoCGod1sXqXusi1YtVcpjV8j2r5i7WxlrWyBrWynTWzERWRikrpIiVkseKyWFdZLM+slgnGayX dFZFX1aHi1XiZLWkWrOQOROZM+RwgZtaf0/t76KOa6nrGuo8nbpPpIal1LSIGudR8xzql009s6hv BvVOp3Z9qaWL2jqtKm/P1Fu+nZ52DVgIzgcngMGgSfXynsTJ6DBYC0pBHsgBGSAd9AVOkAokKUlE unSO8xb0sEZanAMkQT1meJXhGwwvM3y64VLDud4eFQmPVSTUVST4KxKmVCTUVCSMrkjIrUj4u2qW FeT40NtnRcLNKxKuXJEweUXCaSsSClckFKxIGL4i4eQVCR7CTvWZyiPj3YZvMfw7zXLY8LeG9xk+ x3CeYafhVJXXmCDtmtShRtdI2n2w0VWO93mjayre+kbXiWmPqnvFxY0xTa1rdJ1D7J8aXZV4sxpd J+HNbHSdgFfY6CrCK3jIlZP2vaspSnk7pb3nWpi2w3VaWsA1PO0uHdeYtsYkdUhb6MpMm+EalDY9 FD0x5BVp7+G0ka4H0rJDMVmhmPFd23VtV9+kNnuHxtY/H1vvi63Pia3PjK0fFFvfP7a+X2x9Wmx9 n9hucV3iEuM6xsXHtY+Li4uJi4qz4iSuW5O9z5ulf7fRLSZRezFRmqNMONHSrH8Nou/gKs6S08T3 iDWSY8LIBuuUQFdHmVVWVajKAlumSdlUZ+DrKneTaj92UiDaXagCXcqkbFxh5qLkskDPqrJAFff1 JmtkYGVxmZMn0LPSvG4prgn0N8EmJYSHhMNewrnh8ErCpeEw+WsCp2SWNcXalYFhmWWBdhVnVTco dUMNbwHraqSMq25Sto66IkV/Zt4sSqVdcX2K9u0rrq+pkaSl+cn5XUZ1Hj66+CfIF+bMo0/y0aDW XXGhNz5tQ2xaSWza0Ng0d6yOL6sisn5DbH1JbD0DEYpM7hO4payqOmD3oWHhQBmjVuWcXL3ZyrdG lhRvtkZpr6Z6c8+1Vn5JpY7vuZZGtuRjceaTj7WZH84n6TqfpB+Tr681SufL0F4oX1+Tr2+bfA2l rpLiBpcrkqfU5Cltm2dt2zxrTZ614TyOUB5Xqzxdh4nL5HF1HfajPH1/QZ6Mn8yTebxnRuFxk1o/ arNUqmDDiKX6NwI+d8kM4Atcu3R2cmDlVKdzs4xQwfAvC/r7pk6brf3aGU0q6J5RHBjhLnY2VC79 cXpgqU6udBc3yNKScdUNS70zihsrvZUl7trimk3ls/LntlF3TURdQ/6snxA2SwvL17rK5/5E8lyd XK51zdW65mpd5d5yo6tkjl59FdUNcVJYUzQ55G+yOrRn1vtSXDWFSYkLRpklMMKVvCLlEa7+90mH zJpAvLswkAB0UnZBdoFOYuHrpI761z3hpOQVI1wpj6j7wkmJRHd2FwpL4EdPSfF/3y02z6Jf8PyS 
nBJJX5xcMqe49Y9Z1JmLMxfxk7mkRRBvCJZF4YjFizKFPvbG+zJ8Wb5Shy/V57IWLarRkY9xq9K3 Hn2/UsSpxcLkC3cNBcMPUkIB0eJExyBbhTxdRUQ9IuJYgZAatWjxEnIskZD/E08kIeRrBgiOBJZk ikR9CH4vKfipjqnm37kHw/hH8wqT3r35COZ9D2Z+WxihZ6ZsUxm8a3eL3A/XgKvkKnWl6mlib5T1 8HK5XG7SjZdL9WVQVcuDMpD4NyRTJpj/7+o73rrIc6Rvs7+QQtkp40z+AcTdyvsz+v+NstLYarZF pctOZUd9qro47pGl6lL1lWMK8m9FQrP1hK3/z6Mr5I64LHuD9BevzJOL5Xdyp+qk+trz7TckRpLQ XWLfY78gtaQ2SJP6i6Mi6hJ7DSWrZL78Xh5Sg6N8UVuPvN/8W9tv75B4uUbuVR2US/8T4+hB9kTp LcMkXybLS6HWK2fUwCN28zv2/6HtywOjqs6+zzn3zr7d2fflzpZklsySmYQkk8wNWySQgCwC6gC1 Ii1gIbgAAiWKipaqKG6otFiriLsEMCwqVj5fbcsrdUVtcYvWT01Bi9gKM/M+584Eovb9/vsY7nOW u537nOf5Pctd8jQcP4o64Ehr4ay3IvpkyFd4LH6dDUtQCZe95T+V30Uy1A773iG+G8hhPx6PHyVW 5lXmOwipbagT9r4QzUcL0BK0DG2D32MwymM4g7N4LBlLCuQGcgd5kdnErmF/CTOzFu3DCLM4ggU8 EU/Dj+LX8GvArZXMmhKC8fjgesegcWgSKohfp7gTvSyO+h1UxBhGcAlegtfge/FWfAh/SA4y09lz 2C/Kl5SvFd+oNgC/eFSD2uAI02F+n0D9aA/s/SGc0Q5jb8B5uL5ryCRyJZNhpjAXMKuZjcyDzBvs TPaJUqb0j/J15fvL+8tvld8rD8Hx9MiP4mgicHo6moVWwczdin4HR30BvY2+Ft9j/AW+Bt8OHtnj +Am8H7+FS0RDHmUamU3MbhazAnsH+1JJX3qgNFA6Vh5Xnl0+Ddd3EVqHbgBpewA9BBK3E472Pu7E k/C5+Hw8D454Pb4Rb8Mv4i8JSy4ku5gw08tcxaxi7mC+YUPsVeybkitLhdKm0p5ysnwZjPiG8ufi t8XsqAlcmuloDvo5SMZSdCVaAWNeDTy/BkZ+nfi7Ca7gcTjnM2gf8OUD9CX6BiuwBmuxGyfhNwq3 w1XNwpfjX+PN+Pf4I/x3/C3BMJIoaSQ9ZAHM5/3kIHmdfMhMZx5j9jOvM6+zFrabnQFSuI19QoIk emmb/M+n3jn9ZPHu4j0lUqorFcqysrPsKneWnyy/WH6n/A9Eny+JgVz2gE6tRhtBagZgpv4EEngY 5voT9HeQIQnImx4HcRh34wvx1cDp64HX9+EH4LcdJOdJPAA/+rbbAfx/8GHg/tv4A/wJPoVBeEmY JGDEF5JLyCryMHmWvEhKjIpxMgHgZ46ZDzxdw6xnHoJreI35ivmW1bJGNsy2svPZ29hH2RfYd9hT kk5Jt2S5VC/9tfSWKnKcxRPqR44jGTg+wbNB/9XA8V3kJRIHjTj0/+F3I/4WvYxHo09wEaT8Rvhd jT4DPZpJxuBPQZJ+h5vwbfh+wkDkdCM+gLai+5nH8FtkHfo1aH89+gIoJj/D9fgG4gI0vJX0o49B Mg6BvnxFOqF+CGbahg4xh/BSiCW+xjch+i2/ecSMFuDX0Ch8Ax6LFpM6FECX40PiN0iQRGCx5ALA 2wUUe9k7yOfkDnwMYrPfimP+Nf4J2orrQN4O4QvQk+R9tpF9FqR0PGipA7aeSqR4JcjmfYRF28hL ILtPg571gFbcBdq7FfSkA0Zdiy5HY/C54O9+ixVIj28EaZ8DmnkjjOdR9CguMiU41/jyXnH5jCRB zitv7+1BQfRI+Wb0HL4I9HgnVqL70IdoEnOCNYPFOM66JePKpHQROlI+F/0REItjjqJz0Ht4A+DG OehdbEH3lheXMyCNh8qzYZzXop+hGZIOiQfQ+CcQvb4g2yo9Ks1JU1IsuUpysWSqZKJkjKRJkpLU SXiJXaKTKNlj7N/Yw+xz7O/Za0B361kzq2aOAn4+zWxmNjBLmG4mz9SDTLoZlvyb/IP8X/JXcoQc INvJWvwUjPK98svlzeUp5bZyU9lYKpW+Kb1YeqJ0b+mO0s2lvtLS0rziwdN/O/366adPP4hPFo8A fr2A/1g6BTbgivL55Unlk6BvpvKmclvpbXwLXGMIFUG//gy4ugnm5ffA21mAcAKh32YroW/QEHDo LVi/Bz0sfllyHjpPOh1NhvkOI/okaEUa5wPWboMWA3NlAAuQB45Pgjm5ECIrBteApT2IHivfz8yA YzwtKss28ir2lR5ANYAyvwD7NBF9jNvR5/DbiXYW76Fv3Uu3wVn3SLejb6RbmFNwxD1oAxkn0bMJ kPkiWYJvKl9QukD8ysce9hM0A9Gn2qjkLZDQR8RkqFXwSGXHCUYS9jiDlFLJcYYhDoWMPY6RXT5x lS3aw53IdRdzPdzJXDdXzKF8rpijSyrZoOf1IV7PL2DRaR9z4LQgQaeQjz1AY8X95Y8YVpIEe+BG MwWVQqXQEYkKaxA7QIYEj3OMWWW4lLNdijgvl+DyHMtxS7nD3AfccU7CDeDF/UklVu4nSgjCe3A9 skW5k4WhQu8gN4jyxU/zqSRa1osZqTTgD9cw4WymsSFtMZsYE+2otEi5mR0ViYxiW9g548fNDY9u aRnd0drawY4r3pnMZpNkYULXM3feBN2p12NtbbFoW64S425j9rFvIjmMfLTg10qOIKVCgSEKVm5X pfTbFa/i7SQl356A+SXIbrhsD04gkUmF7hPFEydPFGGAOY7+KIswXx3j2YocyzskvMPpk+CLadXn hCr7ZskS9nhC+ItKCZ7Cb8ofsQ0SCVIhK/gcTWDjdwn1dps/wNZ6xCLkDPj9abvNZLfbEqF8iITG mn3OpJM4x7Yyaa8Zmwfw04JSl1agbJprxa1isyOdUKXT3nbcTpuGUelELB8jsUtrHeNQmlNjNe3W GNI+aVJKpJd67GMX7sGqyiV2nygUYRpAFKo1mAugIuEGC/B/vbY+uoY7iPQGazOmBFgwYk5kmXDA LzWbLA3pRomk0t/U2BSqzJ5Myvwv2zLs9hUrHnlkxYrtv5w3ZuzceWPGzMNLi2qXxhrUKkPkhEtj C2iVxkcqGz2ylm5AF+bjFdvpnttXjp07dyz0/eMbRdht1mvU3yhqnBa9pjRuxfZHlp/ZYO5cMFH4 1fL7DANIrkK8oFY8z6g4MFZvSO3qvWBg66usGCqg/FAqGRoxYoaZes45U+myoKWnpwUWmjjZXjax IclDyIguFDgLAMJbSkajZggwFeMBcr7gVKpMSqVKCR2sQS4WEq3E4DDLXzUN4Jee3gzC/2kP9ylK JIa5ul7bXR9dr11zEOsbGtJpYHIvxlJZlaG4sanCUFZTXGI3OUKRNhvZZDc6g5Ece/K7FVnOnAs7 
M5L1UGkNO0GOHwZZC0sS4BHx6G5BOcHUqZa4O1kNv4/MEfNAc3bBpTgCNtpWIDu05XKF3b9pD56L qhDBDXFDFZCACvCGcmfiU7opK4VG4rOEtCFD2BmWhrwhs8oWRUYNF8UuiT0KUQkfxValKYr1OiAO mTuKfASIGL+fDXauxmYTgQsk2WzGAJdnkGVqQEJkUrPZZAUhaWrMsuG/v7360c2fvr3q0Xv/u5Cd V2idPSfzkwtbZ5N/f/hy6bZLcej3H/4XXrK49N6D29aMm3TZIx8+vJoWdJYAE2uAAw705B7kKB8Q ODuftzoucSx3MGZHyEEcA+XjOyz2zACsU1uwXKFUqTVaHaffS+4h95L7BI1nAjr7QOlwr9kzQb/W hE1CZ9Yk+EMZk5BIZ2BaL92FJAq52vocmYg4AJ6LAG8wuUhQ6KdwG7mtHMPtJ13IiTbhdyrglwOu UvDL5YeAGYXmaBQwMNpbALaJcy0N8BU5bNBX5aCxibz6alc+31W8g9JXn4lZapojYySJU79pro83 04WZmVS3x+sS9OvZ4J2wJuBACI/dQx8dETo6syt9awNrg6tDbEhVF4gGO4PXB19SHlTJJipnoMVo fvCi0C3oZEhm8HMBLsiFDvsPBw4HD4fkygjHjZ6FBsqf76QVDTCtvz2foaWga8n+lj+gOaxh+pRY ipkBPNiPGQb80i93SgNBZB0gqp1cp0eioHuN7syI5aRpYrljelYxQLp3Y9Qpk6s1e8GL4rF291Ip ljpqTANkgaBwfCVHQk1HBlV3F8sJ06CEtYDlXvktciK3h/eCZzynCt1UaIGr3AkqwQDkQ6KV+wS6 8kND+uZmzBXbmhM2xH0D2Na7LBp9WkofQqJ8emungssoOcQ1RGdjOim4t4BgWkK8qINmPotAMjOi pMqk1ZmpqCe0IWp5IcQnmk6fTyKXtay79BcTwlZtKhhpW/Lmmj/8q3P9wkOe9okXvYNfWTcmN/Ey wT8mEszV5nYu/mLbjFv75sOsrQfNFWDW2tB+sLJgs8m/8Ld2Qi9aGJ9IZd5A79mJrMWmsLb83H5J fKVkpXlFc2+bQiFXaDPI3+l1JV3E5Rol0woqTUarNXXKNLqUN0VSqWjnKInXS/n3QX8gJPJxp82Z QbUDZM6O1lbrXsACBk6j4MwZhslnMirYYjdnyiAVViUK0YaEviERHWpIDDVEo/rmBBiLdKJASVRv aE70DlHjEC30RlGhF1soP4A7NRUstVqoSlO+1cCv0tfUTs7wsWo5KmxkNlxw/vKPnvnHijGtEbc3 VpP7ef/WQk98UUNTrv4SWe2U1GWX3d5j1ZodkdyF17/y3KdjyePtD8y/dM/crkhLrM3kUWoLM/JX +AwypjkSy2E21l3XMfc8u0yVi43rmHvk7sk3Uq+JfiHeINECQrhJQXCsdd/i3uJmcq7z7FNcl9j3 2CWNduwWWrLugXJff3RG3k3lzlRbKWtNtOwTpp8fz8sdErfDUecIuJsdXQ7BPdd9heN294Pu3e63 3NqgO+X+o5sxGFx+hzPjEsJZl1+VdXXB5Kx2YZ876V7lYFRubECyKtScxRqKQBSRKm8ljOwdftxd Lqv27vZMsNtcbvcAWS6o7Q5wIRwOt8XmdlK50XPOvNODPTKb1SoTnOGM7DkylX4HET+EtMhFbhA0 EIvotGr7ZNtTNgJG4VxYy5LrBaUcy2Ryp9VqQfsA0lxIDpDmcCGLz5K0CJYplqWWPstWy2GLgjaJ ZT+ZAnHtJlEPuZO9FZDLVX+DVDEp4EFvMVeK0r6hYjQHFrC5eX19lKWOhqGZpuYr6rjTFQFdRNHZ IFdjaD7cDZzPecQZEBRNrrxbMKlp62g/8JaWu8NZB7BY3EDr5PIOAVY4/CptRpw1pb6yNWcWy6d1 zSOzb7NB0Qu0QpOCAVcfzLlLgAl3CTDbrkGYaVcfnN3VB2d2CXCs4f3ELKLSLTgDGScl0AWgUckL 9hbwMqyvALr5+7ie5bPYqMfMpo72vABLV2mI4vqU4uv4i9GltyXa00eb4/GWqS3xCrhPfZQZe/oo /ldJDjhxrYgTWhRB3wk6uwrL7chlAyFk/XIsXr3ZlsFfOTs3MngJg5m9+DvkJe4dgYiIJFpHII8E nQkIMAgN4If6gwEW/PRNgtrcqdDMrVlSs7aGqdmLNyIbmSMY5kmXSvukG6XsPCiI1BHD+wCq/TTs FRRgLXyBZIAJ0P2NXk1C06fZqGGTGkEzT8No7NG9OI9vqEBzL6CxKBY9VD66hwbB7SoOAjoDs8DF LvYWhgYBdMesFGwWByt3sLYotsiB2CVOcCtk5mglEXv11cBibAbEMABP+WGcOQMzFFYoqOj5CtPx 3mWFZ78rlk59en1PWyQwNS5cvPeGdQuW3OyzxVrJZZTxbMeJYKn059eOzUx31LWN0RiXr1r5q3P0 QgOZQtlPMeMIcH0GoDPN3DwutLMek6fTO0M1UzfTu1x3I3tXaFtImfHh8aqZeEA5oHtZ+UfdW6r3 Y4OqY7ETqmJMrdDZdV2eLi/LR4Jgv3oEfaSTYRR621euTr0kCJyvoShsxtp+6ZCG30u6RQ5bl4oc 3ap5SnNcI0Uaylnga3wvHot/dYavxU+44qDoqRUHKSCDpTMAJFMRhP+NVVaJHlfGEKSYLDap0wUI bLSc4ZwUP9R1/aT7v/7whfsPL3wNu36zZlw8F7HFnfaLXuvOSn2Xzp9/6Zqe9pvI/vbmMnqh/293 4lEHPsWph1N8Q7zNpl1y2ZTSxBWzFl7ws19dRT3mu8slEWmN6A5BheQ/xLldni6kVKkGyF+FlB6Z 9HqkMmqVCBm8OIEJ3qqQs1q1Xi9TLlUcUBAF+NFItlR2QMbI7CZwWM/YfOpUiWYpD/FtL+jkTiw4 slQbngFxx1TcqbYC7IAHQAOb9ZI1B23UwvMVO57lzbhq0ZuYllKUbYzVNzNXlXaaG4LRNo7tyGXb u3YfOh1sr8vVWOhTfdtBHqiXaUchlEIloU3GypWKWmmktqYm2qlZVCtfUbs8ck/tbRF2veRaxZM1 T0YHJYOKk5KTCvns2tmRRVGmU64VLI6MNqYXNdMHdWQNI21d5+QA1gW8gVtAsQIJdyf4cmDLHftB PqxYi8JkTj/jaPDRth7aMWir7OmzHnz3kOj90EKUCWBNAWw0DTiaqcGuaFkmnnJ6DRZWqQhJwh6j L4pcZkcU18tjUZSUhqPYa3BHsdMCJK5MRFGKBTLSlb8a/lERG2HzRfveFD4TxUtrRoR+xpFh4I2T J88/cu21b8yfPHnc+OP79x8fd9Py+ZcsX37J/OW2DQsWbFhx1arl5Ja2ewsLHr/44icuKdzbJmyc suXjj7ecu/FvkxYtmtS9aFHx63PXrZsm0EiPoLdhPs6D+fACLn4sFDR+Vzjzufrvui8i/5KcUp/U nYrI1stvUN+pe0h3RHJE/abuM4lc43F7zvHO8i7QLahdL5ENqHf5Xlb/Vf0X37v8kPo7tbxZ3aWe 
jS9UL6zZrH9YL9MijYb4AnWi+vrrOhPMWuZ55lXmGFNmpF5mCUMYxhDoVErsX7k7DRpfVYd3aIfC VLUtWCs4kdQL0bgAeCqpwOpW6VPS41KplELlGZUu9HYPDhWLn4gAOUQ9rMJZlQaGFyBWoOwmVc/J YOZkIz2uhnTQGD7DbGZ9d1/ng9/OuPz1uz5a+M/9H6weW98atXlqovdiKeGvnjbzqlWTNxDH6CYs P7h5zWO7So/uKb303G1pvjHWqjO9jt+7efl1v7jsZppTOVT+K8MwG8RMRlZQM88rpObndcpKssEr KA1N1QDb9v0A+4QYY9M4cmQOwTgy4sbDETcZrjBMNfYu1gwH4QTfXq5h5jHrYQRm1LwH4tkvBZ04 Di8EDm+g7eqyZoDs3gVAarZb9uKJxD6c0ykOVUaR+MEoMI+HNX/e1HM6p54Lpy69iCWl911Oe0BG Si09k1uaJ/cUvy5FEx69xgLSVod97DZmCdJAXNe6Wx5HsjjCNB1mMWvjFl3cYlYhO7Y73KzB7lo5 UImrKUqhRK6bekYnqSGk+FMZRcVUMd9rsZ7Tt1PjxCymdESd/G446Cv9clR9PdTqRwHXBdCAG9j3 UBN6XeBXNGGvH2BvMfo5vrjm4rqFTavwcvPlNSua9th3u1UJ/z5Mv1iIcJugMdZkGeUfGOKsiSpA tmcLOmlCm9dO1s7VLtGu1Uq1+8hsJEUysq4/5GiGOHc2IJ5NpEkIE3SGjDeJkwP4s/5Rv3hIjHCj VTPffSI3VKimEPJDg9yZPII/ljE56hPxBJGaQw1hR8wWQaaMNYLsCWcEWdLGCK7CTATMfQHiClyJ tKp2SwwlQL5pxkBMGDQ1DieeKoogphaCVK4eNsYdwK14nd2ulpnrNnVdeM+Vbz+/bHJ9xhe01rVH 2uZdfd/u26586A4sv332vewNDkd71xNdeas1H7HGG6fsXH3d7S95DVmfsT0SSY6vbZyYw8zmDVux +c46auGcEEu8zT6LBPZm4SfXNqxrvbbtVv/tiXuStzdsa3/F/8f8B4lvEuqIvyXdlZ6dXuFfmZai hKI9m+j2T0i+7383IeP8rvya9uuTv26/I7WldUtObuYXCwf5N/hB/hteqkgp82P4a/nD/Dt5KU8f 8RkXa8kY/EJtSybnzyW2+DcnbktKEv79/r25fW1vJCR+QaPPL05gQ5APtf8G3cs/mZaoc+o2dTvj j2g0o2cJyUQC7S0fQHaaALES8WXbRvq27chA42xapIMjD9Nu8O8nwF5OWAywGGHRw6KDWF4raOgB Oai6BPV+qNI/jlVLj+/WVF+q9vrAqzlCD9TvmSCICQS1jtTWRaKxeH0ikewwVc7+H07DVU6DtJVT CmqXfGR+5uEzgz77WvCzI8csaO1ChAy/Q+yHcTwr7qLzTEgKCeRPICR0TsuI7NU2NGaQ0NoJZPrF NNWwSPgV7zfxlKu8oDXms/x4fja/iF/Lb+C38I/yf+Lf47/gT/FqHe/k8/yrPMvz/jZfBOaHklZK 2nz5cdAE0kpJmzC+M5OjpJWSNmHiNGgCaaUk35ZLs0LS38o24LqMPVNbayMtra0Q6MvBXz8JIRs/ he/jN/KsjMcw5qd2TMzyNKPSQou+HXmx6O8RLwiuXa+EgWssQNSmPO16xmjP87xd3roXb6FvoghK 6GjwCxAc+Qfw4/1LEjhB9zRwzQnamUwIiVUJ+jxAYoCsE1SXN+B5DUsb+hqYhgFs3TWaR8gH6Hce REZJDnP2jqnHhpMxxRyN9U4WCidoWFQYinLRIURd1OHQq3fZUCXJDnRwUEyJWr8Xm61n66sRVVTM k9K9m4cf4lrPaXNrDg6ncPwQ6Klc4lXvgBLRaCwq2kwxhuQh/qO8EJkCl5WbY9Xm2/xAoOuNfjsv ruqnnKJllVk0dEzTkjKNhpA6sf/oDs2ZMFAMBQGlemERz5QuHxUUWmveLwARWVkDR48KcPY4JTFK En4gfkEklPOCUpNP0hNBkCmeUGen7T5BCxU/nTs/DT2h6+gzIIb+GJAfPUcymyYS8TLqJZh/mEys WjxJpTugxxU/zVoNkywWa3WPAEXXGnxrFzU5pVe68sIEn4/WcROlXXixz5bv+gvUx5Q2Ltg1vmlC vr1rrzCpfUk3furjMxlJPximZkwtVOkwbrbWuhMt0Iq3lJ5r/bqfbrF5Q+8CL8LldwFDi4Chaebo LmTjbMQmJlpiGWDDZ7tbsilbSxaqu4W77Lp8SgAyM/1y+kiakVhVdrPVaWcdVrM9Yg3ZWUOSAiOi JEmTtIiSpODwQQ2IDmGbLsnZfDbBdtgmuwXdktyQ2pD+LfptcnNqc/oJ9ETysdRj6efR88kPbMdt 3PzUwvR1sMGm1D3pB1KPpN9KvZNWvmH9q+09+7up99MSe0StGz3LOlB+C7wQqJ1BTR03EkuHAZAi 4JlksqAQlIJW0LE1wxiYfP57CKitIOSPM9H0kMM4x+lG9g5/JYH3DfcqPBPsyVRnCqdo8q82nUlR 3mZaKmV+GpTkPcFls5tsNrsVpc9J47QPNksLsE1agA3SdIO0zQobWO2pZNqKk8K07G/B5CNagg20 ptJync0L82aTWzOWjCNjJynY7WtBhWNyCIgodMGcfrBjXlYsplWKKZVifKXIiUX/6DEZWgpNo5oz rM1ku9h2u22nbdB2wiYz2YK26bZrxY6Dtjds8qAtAx10C9qU2WBuE2KuTl3ROoVBlU8k8gmSoOhk 8vWBGSWI53gfnwSYFuoAKfFnApcB/MKYppSwALuIWGbndMLocRmdEIllbtFhry6hIzp7wysPVT30 aBTQDSAN0C1a6M0VAaNydgC2ZWfQCtmGbx+ezOVPnBikYRhgl6F5WRT+Y7FKs18UBdevObi+3jZc rcLbMK7Z4FI0KifIMRAkEqrqwylqOpZq8krctj8wPm8TgYPvyCcFIIgSm4h7cSt0AUGUVLpqTdBV SzNCQGxV5BFXudV0lZquApK2qQxQA5KkRNxCI2aSdHSzSpD9Y0SCIS4T2REt/PAGh76KQBhXQEim /wFkMQEcrqDQP7vy7R2NTfmuD7EVGz7uyo/Kiomzryfk8xMmfrGDSRa3nEUeyagIQE/xLrIQECcy SnL66HAqjfy0eJzGi1cB5vwGMMePIOQVrHeGHwwR1suGSZ18pOuj/Z5GUh9EBYsaFgXVTqXkP2vn WT9kuJf3TPC5IMYOxvyBmB+4qMuD+xiN2TM2gmOgHzUDZJbg9rmSLsE1xTXPtdTV59ro2upSbHQd cBGXIx4VxTKgR5yPS3ICN4Wbxy3l+sR7TMqN3AHuMMf4qPmNDeDxTz9TzQ0Vqnf8C4VueitkKD9I 0wHUQS+CIQUhjEYr6QCPo87j9rp9bkZa56gJ4hAPpNYZCeKwOxhEldxbRAz4/5+WRaYlgZpqPimA j4u3rUpjR9iQI4f/Wee99a7Vf3ju+i1rr/wSb339Bxbj4wfOn5K7ovXQyhnnLIaZcsNM/QtmKo5G 
EalwUdjT6BnvOc/z99S3KWlTanxqeuq89E/TEqu/KdYZmxlbF7sufmfjQ417fC/7lNqoNiZLh6LR WHN8fKArfl5gZnR+4LLAg7Hdsf+K6dbGbo6RerlgFxyk8o2ZYCDg/c9e8PM/lARBrf1fsPnHs6/2 TPAm4uu8eLoXe6mGgVWCcqC/oYWW9+1OpDMHK6sEYcy0jHdV/PH4s3EmLozJxn2dWZj8OHir+Xgs gWiJYol6qnD19XJbU8w+ShSjSDRKgTZQPT6U9/XXpzNiG85DS8EEB+8L7A4cDDCBQ1Oi86J9USZK bwdNy0b/maEiaBoWvQMuievbFjDf9O8On3XumqeOOuvccRXvbqgY7R0SUaj6xER+EAJA0c3Lgcs3 FBUzT9He5ijdaFkvEptVwfMlGx2edEOqgUgbHZkgTnqAZJ1NQdzgTgSRx4u4XFX4rqaeDZXAH90o /Z5vI7NYLdZqkjNAampogFiVx00/9mkOv3NsaW9HV31Pjz3e3nXFv3//p62zx8/pWL38C/xqqfQD 2Xz7nnlb2ppnN6/kvKPizfjqnkPO4JSalp+CFbRA/L0X5DSB/9L/rQzrAYF3a7OBPm3WSr1ybdZS KQyVwlwpjJXCRPE6qM0GYfNaWEKw1MAShuV99DZzDA1KP1BKUnIlnSyrdfQsWhE4q5XMQoIF2pSS WR00bDJVw6ZIuU+ELK4aqVlhsVXiPjG8clRDLJcYp2m5Mx/Ber4Sp1Vl3SreJydVJwWkYViqIXwS rHj4E0xKxXB/HPpNglFwCW4hQt2cPp3ACXrBKthA1ZyCAyKLs190qgaF9GhcJFNXpyeuGlbh5/cR +hcbMIQrOnMGf5uy2akU6pVIkVQQhWA0ZxTfJoUraMIBrHCR2uEcfaiJxhMncmfylKKMBR0+IpfJ pXIi9fo8PiJ1SuxB5Gb4IHbIXUHkI64zEFfJOIz8h0DgTCjgrydZfUXOQsM+tJjMhOD5+wLItpaO HZi77bqujvb2iVgjOs2bF5/zy1q7KH60p505Vtz3XOnkmGtuupKMa4nXj8JUwIrbL7xpbEddK5lZ dZ7jLZQHnwMGfgKyFWYAqubjhcb5poXhK/Fq4xWmFWE5RlIDhxiDmWq6fW7GQEXjXKggAcj76LDh sJl5H5URMSSMvClvTJrmklnMBcbpppnBKaF3jG+a/k2+0X9nPGH6znIyZPDhRDhPkqHJpMcIExmS h+kdTFy92waielTQQsVIe8N+e0tlFZTiKgVUjNXeD3aqNaNnhahfRzMPHYqq8VRWDalGhFHJGZA1 jIRRMWXgDwSDoZG9w5/6YshwrxXETSmoBA1402rJ2c+DVdc/A6vvRkaTb6Ds7DcQDKXrGXM45LME QyEa3ivNFpMZFhQOV5omaJr0DCFi06A3GQx6o/iIzyKhEXTEYDCGLIkgDpqJ3kcfDTcaTGEzMiAz Ezb2mbBpUOHJKDNut4KEQyFCsDy4Dz+OzIjBj+8y+BrF57lO9jcbsAHKZ+gf9+pDDJiQCagGb3ga gmc74KttqDBUcNjFZ3BsI2JkCr168ekh0YHUF2AxFERPsupF/qgAGXbYznqUu5CgcGVEaYGSSssO hWuETzks94VK3GyinjR4i0bqMppElzFggBaQMCX0Pks/lEScfQinQzEglZZFkw/5gVRa4GCH/EDC NAlBHW3qbf8wgBXdWfHE5nJfvzWep4BJS3HAYPzogJ+hRpCOZ3gnXMD06ebvRbhGY4PR+H3LwATI guc+vSqRBC3UU11cNPb+N7eOXSSqJdeVTyeu/IiML+5ljkmzdckWMXotvoj/VgqQtqpG1oySn9aD XzK7fJS9m9mG6lATbhOyEQlO1mNJo6Ux0JiP5KP5WFv8Uu1qrULiM/vukr8ofcX3hnRQerJRjtAI 7KuIsMkzQW9MRpr8CF9fh+siTRm1QQT4hMeX4ZRTlARAVEmU/NwYnhzDsVidSYinM6b5Bo73yOqU fRmc4VmVBg2QmTv5uX7sHw5//I5m6/rkAJkhGGQCTItX5pMl6Q20UfndFTse7S7SXG1UNOgAn735 /BC9ra3jhPj5eY66G9zwHbShKL0ZsWyod1llhkCP+2EjZXVmxJJzVkuzWO4Yjgtmi9LazH05LLk0 NVPohfCAz1bvvdFbkyPu4GYah/O6w/aeqTwmJj4l1tjEeLv3Tnrwv7Hss8JVk5dccOv/sPelgXEU V8JV3a2eo+fo6ekZzaGZ6Tl1j2akGdmyZE/7kG3Jh4QPsNkIfEjGwrIly/JBICDuBEjssNwQTLJA COELxjbG5ojJQkhIuDYEsEmCnV2viQ0Gb9Z2AsHyvqrukUY+svm+P98fuaxXr46urq6qV/Xeq1c1 9cHyBjnWMPthde+7UdKjx69e+Y1Lx/trL2l9oSVVXv70ldf/QU4nJ8SsjUlfolh0eR/bPHQpWd1x r2diaVlACk+oRcyZT84c4O4pcsDyV4G3qOkixmQyW9hdxteMHxu/NHEhRrSEYmKihlEsNTEl8Uni k4qv+K+UMzFrTDXZ6ehWTYDEVLOQoSEPIH6Vi/jVMnPCeAHTDjQ8deVjvcEWq2Itsw+CdMoFUSTM GezmsrBZCJEOlZFBpL2pGtoN/DYDPmjABvpuhzln8FWhuA2GgyoXa0Ybb7sPuD93n3EbtrqxO5/N 7a1cfg0dB5VrNXUdHQprqbRwDEZCztGwFvpr/PDcYVUFkUztL28XRKpiowusr7xCCReZwkWhEC43 AVD4SAhXGMtCKL9rfz1qWXCVKiZKBUvCUhblSoV4FFmsGLg7VJlPjcYYNsZEokVRFlIZPJxK5yMM zB92kCGBXGRBTpS6omdxgLhgkxF3znqw7T1cNnT443n3zThOeL4oHRfswu3XD25/5M47v1/kGMqk 00MfvvOLoZMV5bWU09tI4FcPXLdt2zVrv/td4Oz6gdJvB0qvRJ+rc/Y73pU/iO0vPSIdlg/HjpT+ Tf5b1GyUTVGmXupyXCF1uVaU/c3CCxYstUhzShdLf5D3xz6Rj8QMPq/Vgop4p9fvtlhFk+jH/t04 vDOCvl4OHfXlTjFcbjDtxq2qieHd4YjAzw1SHs+b7QseDDLtwXeCTNBX7aTE3JfAKKEkUom+BJfw Vr11TX7jEGh5qB+I+pC263L6kHhIPNZBSI7Ie5ohLyITuyoaiVbYTICJAGIxtB2miLzUThjtwk2X ROlw045st2j2F4jstDweL60CXrki4PIkL7r2u08/8crgRamLoxUTO24bOvX5zTtx7JOFd7JXRHMt N7VO8ki9/tSTN2y63SfOmVQxbeI/Lb/549/hkEJk80lAf0d0+utTa8yC0VbkYk/YsCiEXCFFrFCE GleNolT8LvG7Ckp9jtPKVzG7Quitgg5tQBRCizTkAcSvOgn1WaPGAtPOlwplOmILei4JeoIt5luM biehPqfBCNRnFYrd7pCJkJYd9eI+zLyMD2IG+6ripGd8IbFNvFzsBdn8gPi5eEY07iViU2XL5ry1 
BrVMzOvGh6lM/HSYxMyExNxnk5gnWmqTolI8hEptAGIOILCEvYDANAoqrxAsFQLQV7klFMWCeTR9 KWHZpbiAvsIypLrc56OvvBaYzspI69zirE5fhZv43MRoiwqz7PT75308dBiXvdf+wCxKX1GNvL77 gyLH314k1FRbXoGFX7yDE+n0mYbqZAF1MWgK9PaVQF0BFMMPqubd0m75Of8v/JyVaAFbSoKZTqZH /gX/Ab9P3uf9mP+T/CfvfzMn+f+WvpL/Gvoiaq/nZ/CM1C13e670XRlaEb2L2RraEn0q9Gj0S68Q MBSxgjMWxEayJFVMyBjpvpM3khk0vmNkjhshAbuflYJqIEupzR6ABTeI1eBgkNkcxMHd2KNmkSoR q6kwICXZEMJ21IbeRuwZsutqsWeAOQ+TZTJMlslw2G3gwqIQ3M0s2Y42CoR5iU7PUX9Wgvjw/mgs c1DAgi8R2whc5RJVBm4qG3L2ORmnarVnnN54S482K5Pl+RAZMNA/c3R7DmqDCZM0Uasdox6Q87NB 1RvOBXVTRepDpYM6b0b87WX5lfgzqr4Um/J2/Rqfx5z5LawJxblQFYDo7jO/3e6g/NViMhHA4Ai7 KeXrhM+NjA1wLm0/1sB1f/Wc8i+39780N1g+Plg29KvNp4b249w73/i3upk1yn/U3Ne98r4Uvqx9 WVqeUFVWEp+K3b/eh+2L6lpXz+7csOiSSxZBm94NDfrPQP91eI4aNviL/aX+cX7u/gRm7KJUh4DV tzDn6lhcVAwwkVSuQINnGaW3MWuaOyJ8qBYLR7Xu6do6hEerd7Tk86h3JgdbYmrzjExMnTMfQHYC AGDEYl1l4QCSOqvrUGd1VZXoSXlUT7tniWfQw3t4e6fJxHQazagydbJoNz6qWoj+mQn7spXYgclE ovhE1ybLkEhU0L3iVvFpca/IIbEdvLdFTvRmdmP8TH6Kh3FwSGw6Bv1HTZxBIiCzSJN4LLcW4k4T 5NixYdlVJLNMEzllA1Js+PxqkzrNEqLAKszl0rMyXTg2Ynn+9WUELnt978IDufHlzsR1y1bMwU0k jtk7ZMvrSvCfCZx7yzOh8VU1jQbvxOq51IwOIx/07FPQs+PRp6r3gAnzvJsv5VmiI2A0lqjY4/E+ z3wwsqNB1Q41qVT6nMlaUzyYTSZjIRel8dbaFshIMUjf4U6n9Lw7gy3jkbIb/0i145MBYKzKy8oc DtHs9ZD+EI1tJtxnetp00MSafBNQmETaUqnBNA6lcdrb0HaF3hn0JBXph7Ua+6zJacdOHNNIjay1 IJpQngXa15HXSmlGz8WOjGaDd058npv5Z/XGqVc+vnquJzWp9WhLLuWdE6v52rTuxW3F6VzrkdZc 2jOXzrjAx8xKxGc+uGHoOnuogXTD+JCI8bo2pTK7aGiwIE5jb8gVaNAXs6AvWFSCXtlDFC07BOsk hsxPLkDaTZhI1IwumEPXjCYRq6YVGrl8u7ATuOEruvOx5mBLDZNj2hgit19Crc2ttoYcSM+ohIde LHoBYmXEMJdsx5s4uri6XIqckpfIrOwNXProsHn/6RP0VEpTLrcWZj/axrBuwRroimYvqBhktp3A 4ik6kGcQeOoxsnIVOfbvH7r69OTRAxfGaTO0zf3QNumiD9XvI+fFzmXO9c5B523Ft1T/vPr1mned vyv+TfX+9H86j6TtT9Zscz5fvLP6+Zp/df7c9XqxkXM+UHx39SPOR11PFj9WbeiCaX0zui2yOf1d Jy86K9MT0pejhc5LI5enDQedR9Mnnawp4oIJoT7SFb4l8nrk08jR6F9SZjm6JcqgMJeaH14l35J+ PfrL1Lvhk2ETCj8oPxi5L/V/5Oeje1Jvy8YI2T2bkSUi3/ZWzRKglYZUz+w5GXnh/KxkR7Z0CJWk a1DcecppcJJVoXJqhigfd7TPI/7L22dlafS0NhJsUevnZyPKjKwSnqxMC89NtYcvT232by7ZHNgc 3BwSZBUe98slHkaS1ADGDMsV8QaYdid7Ro0SBf7C2mihih9Bt+IIwV+Aqh4HYTU2D1uIRAp1iwE8 cr97gbInqc7Iikkl+UhyW/J4sgglDySZJPncyJTsgSROJlO97q0g7LCPuLeBd9DNhdyb3U+D6MMR Faxaksi6VRP8VVRl3GpD1j3ohYBgmbLI7ZYnW/Xa5Wuer3WA2rY46fLtMI3cSJ+vmC3YEkZIheYj 26JEEFdNTll2OuVoJEJCsNDK6XQqEk6pJeT0EAEzvVlLGnvlDfKGFOtE6YgcjiZTaXOdhgNqwt7U XkzuiboHpTH5/U+ZuXunJDkRXThsZhNZM5BpiakPpip9886k2y0QXzVbxJzJWxuJONPPM18ihP+i ep1KMNzpjQWjnak/Vpo6GXOnQya6GPl5fBQ5mcdVh4QcyMd7+QpzpRmbGfO7e/DvkacS5fR/Jw5V Es2UeOwY+QOiBJYEE4YF2MhKyts2iYfFU8c+1Q8QFjdQvY7x1mRl0TfEVzltt5OgnkqxTjcHAV4G 8I5KytlQY3xrLAqDIhKFhkMofKtobDI2ITLFLs5vjspk3vJrQ1jbGR0+NBAmkgAINmk1b9Nh9+rm H7binKxCSCYWGiWEDCBAfQgH9XBQDwf0cEAPl+nhMj2c0MMJPVyqh0v1cFR/NfWJWQipghN4xVLS d1ECIhTozxFfjdjITi1UXNZAfis2NShIOZkAUswO8Ev13o7qfoTq2oipSCQfsoEElIoAkIkFE90J MsGwgFJFrTY9gj2XGIQRUEZAKQFBAgIElBAQIeMjrVLMbQUMQIRolNIEBAkIEFCiBb12CAIoISBB QBkBpQScb5v4/+bf4v7KDqSPGML5piPQQpQyTcQwJgIfpRAdIVQEkcbTHtLUmmvX9negjv7+tWtB 0jlbeVDnGKf5ugRkwHmMrsul+KcFSoTjj3hhTSZYwxe4jTJBn7bmIvvxwqEfj+gRTs+5vtKfpKvL r4b2aQtNaeJlWGOuhzVmNqwxHnQtsY3/B08hUmbmHN7XFGzxQGazNin4HCLpJ8pNkr3iJeIjhIn0 5hdRarpAz+Noq+cFF803z7NiagLdyAcC4/ARQkWYfQJNYe9Q3Qfs+HH+x4EfV70Q2BN8oerNwK+r jBJRS+3wRaneSG12RTNSb6g3eV3ouuTm0Obk1tDW5IHQgaQ5bQypbveURQQyi+wUT1JopzFJCg+M P5BjclR7bcuMp7QEiKSSY3CZbP248Q0TGhubXmK2jmYX878XYrdNthWmIbe+PSZpW2Ug0RXlf2sk FDxP3oItNdUiFeV/piRZ/byWVeujKZNV8ksmTY167PZgi30PoA+oJcHqiiw2TPGFzRVhbqPZMIXP ZjLxuMsM/Q/992yxW63NUhWdnyxVwQmZvO6OW0LPXW1xc5J7Nz6uOoJKKBViQqSnQ6TPQ/D8rlIp AQVQxWMskU3QAhIHEp8nziS4JYm+xGBiS4JLkGcS5JkElLQdVSfh0e1NYiNlHBLZrY3Y3vhI44HG 
g43HG4vepghLE6surc41qhNzmUZ18pRM4+BUYsY4cxZgs4kFY/sCAP/UkWn0Ts3pQqz+j5jGzJq3 aEdvI27cwwyhqTAWF9O14hTZbNCm/GcTZGYhld/hDmi6VCu8MKHNI7p6qnJxE7W9OUUecJMHaHMV wxNuktFNMrrJF7pHthgW01eAgDTMpucIK+mgmrGG5mmzyIVtW5mtzdvKlk7T7isjc0ZHfqUxaCsN Llhp9K0SbR/bEzBaS+KWuN8UDKBA0GjwCsUBHDD6AqzH6gtgqqch762kh5g0oTtHWhvmqMggWagI iKh2aVKagBxVn0uT9OojfVvEoBkpYt1Y0UA232Q/DW8HX8u7FpMzUvRcVD09E+By5Pcs9ZOXo8Ij Qd0yR5c/KmovylYvqsiubuifcZk6aVLrK5FoJBDPUjQajU1PqzA37CHGOcQch71jQjpeVVVVObH9 hqEssblhbq2JSd7moeVaIBmvnqrhGr9NMJgL62AuJHspWVyr1hGh8N4Ae8B0IMSYhueDKYtMlPq1 uYFKjHktOlGjj4h6CWpPV51M1pwjMeaZSZPxnCSNmSMUP0ps1Im7JqnPtEDFQSI0OvBJPwiNWb40 kRBFu7nYTQjXaFIDWcp0Sd6sLj7aTdjkGxdEIcquJZODNThUg2u89aNFyCZ6Km54ZtaGJ4iRtOvz IxDnR95ifWsbg+Sj93NBD+tdeH7Zcrhr7x1/a9uuKxeQbqP9F6u5bMaqeXnBMuVp0zq0OZVae+md QzcOy0g3Tg2VjVs8dKM9OEGTKe16N4JMefGZA9zV0I92kCn/Vb3sOWY3/755v+1D6T3X+573vB/6 95V8bPsL8wVvfc37mp+RjjkPuQ57P/FzH3reLznCfMwfNn9iOyIZOj1XljxW9EPT48KT1ifshm5m Bd9lXmW7Uup083LYYvCFOUEk2lYzQiJS0EHEoReYk9CdxczC50LGlLHPyBr3QEyAmKeI9IIL7fxJ Rwfd1lQFf8RuykkEuKhZrynnJWa94GsUROR2mai5Yvo5b86t3VOQ135efePQ6W/fcQbd8s0zt9+B 2ZvenLH04duff/Fbt72In93whxuv/+iqq4998/ZPvrF8ft/29Ut++EPEnPl8aAF3N7RPAmXwPrXm dOhE5HT56eoTqRMZnvebE8yu8GvhfeUfVP+p/HA1H/KLiRq/kuCk6kGzkKHGwGQnKagGKv1qbazK eEH7JM067R+2T/KRQ8TollhlOOA76d0UMHj42nAMBHtbKWnlaEpRlXaFRYqoKMpBhdumYMVX7/+6 z+f1osSfYfWjjIdXN057W1d9G7YS1XdW32HSN5iaDlOb8EN0wGtWaUQHfuIQSBOf0slxeJdaEDO+ s9XgwbpMWUUwWp6IJCqCpSFcFwVQFqoM4Uy4Nq8ML9hvSqXjiVSiNsql4zVRaPhR+nCpKukvqY4n /ZXRoqoSSPf7tHRNJU6n25TGVafoAgMsdDWxwk4SUE12GJIE5BcYokDviF+In6pz6VtXdHsTZxL6 bibdq1p5NbVMuruAwZz5wEW/wWVP3/lc2wOMPO3bl99/6cSnr7/hJ2uHtlFirE42sPT+h+np1NB/ 7P71TWuS+DuVNy9e19Yy78EHYFbtgVmVjLZyvOk5BeP7eCxRBizuy5rFVpF5WnzaAbwDZ6OmyIJq Ua1FI5L4WZrTvN2jpvfhRn6BbRRnes7OtjfYYrEaJYdSXZNxqJNnAAjHMw6bjy5fqVq6nO4IJqi/ S/ZmcLlN2I0DathGlHK8z2tGRgUIut24BIia32LERl8lJr/YR0acI4KoMW97eEm4L8yHvRUFmlJ9 i2WueBh4fqIwmkNP0UFPaZBuRJ89vTq1Y+fa8LCLDCsytmiRnXVEkeigG5L5AQQCLZlIZJFwHA4C KMftIECfQtYOz830wBmdji+gg2Warr97cudl6sTKxIJw5Y8HR6ldqdkQe8dgx6TW2kzVxNk9PUO/ PktpBf19L8y+TdDfM5ifqVmTxGe9kju7InVL6p7Uo8mdyVeS75t+a34/fdj0cfqE5VSNw4wNRQaT ob4sVV8zo3x6jTFGRkcfMU0l9qlmZMfG6Dg0qXw64mtQNFaWrZleM+PW9L3pL9AZ/NeoWSoSWIup xpIqFmRLwBPy+lLShJuF21O/EX5XYzvc8McJX9SwSjFOxYrZuqTFjLhKQyzstnhTTFKBvk8RYCHW QcnajFn3LdpRFrPm0dT6Bi0VfJK6o31+xqz7NL21TUsHnz49gzz9vOYdVIWp2RS8nCtFzRP0dxBf NflKMxOaWIvZvJvpUZtTSTmVSrLhcYZQ83XNnzez9ua2ZibUjJvVaDzTrNZnm9+fOLGJL1b91Zni TSKMt4NhFoVzYSb8vs9cGpYFFZFNn8lzK8mk6dCMc7eJL4sHRV70tRheYBYCpxNjlqhCMDA3VKfU pcgRGiK3hqOZOu/Mts36pg85OCdqN5gAD0A3CY+t7ThUCYzBMcpH546N3M8EQ1hqkEafmiHctkPb 7oX/a6mlBRmsPkSOmUwlYDoBzQRMI4BqE8CP6b6i+2FtP9WaSxE2mxqlEBufmojdm7PShdNbqHmg fv7cjELnTECmkulyBgHTCWguuEPhLNEeEzUR2XBKDJt6EOfWJ81S/by2fuJfuywoox0I1A/DFudP CtIH2Gm1a5o2Tg9VKL1vtHf3L739o8X35uwRKQW0E6+11dx08bfnxrPZx/8yf37HtW/MuLHJGbZV jBeVcfHxzEOhUKkDKiDaS0rid160pnVVKGi15VqbW3PltWXlVW5Pmc8n+VpbVq1p6fSX2CCpdqqH Gsyiu4AWn+feQlXoqe1eY3g33q6G4m4UTsTjAd50sijsEPq82OuVq8vLcZ/loIWx0KaFEe9LxmOa IJ8IBF1IJlrwdnmJ3Cdvk1+WD8rHZbMIkSRiUC6SyWUAGGfzlwHAAktX2bniZ5UdjoYaYoULs98c ctUC3R06RJdRUXIyHAusBQ4gxlkUQPrVCg79BqaRXb5E9qxtopG9oYy/oud7N9cFyhqV9NCB5Xv3 0nmqlc5KV+u7Ql1TXOGpvqbKskBN22Ob8CskcQ9J26Pr2oPQUvezd8AqZVQj5ojNmTMRJZXZ5rN1 m7uVU0pRuW287drEAbzP/ic7v1u/temcO1M0jfHIujSaxY9YVbfqomapkm6KSsxT7ZbCJeuD/FZF OFju5Q3mMGH1zaaTwbBFMEYiYbJVa0d9+Gl8ELPEKMdXGX6BMSEf8hPm3uEYlHBIwpK3YjRzf5jy OprBNKKbQ7kmaWTl2WmC6Va/7YReRJK/NkTbgidDe0RDpVmc5sOcj3Dpryx6ZGbXXZUhjW3ITZ60 uVtfMU5PJqx5TVnZwln1F2Ha5KcfnjwxreIf5FcN8lOx70H7R/A7u+wicjISuY7gOcGWeQdhyUln UJNpyqKIQiAx6gQ2CDCRhlVTwQVaZ0lVJqwxE+FIBOUbl/QSSES0+0jvvaTFjzwy8tuuIzt1UElC 
DoJDlB0o4hAjWlgitpikjkjy+3wg1fEIOmynJMHKBsgutV08LjIgJxhUs+K49Tj0zK0oQrKolzux k8SLSjvCHJKpCeY2VATlPrIjet1mzQRzre90h88DM3HHMZ+XokQ00w8sSg23GjVlNvgeXZVdwE2g POM6LKwRZQUmhgJU5WoDBKtEcFeLq/UoQEQPOYZEgIOc9MvLKCMzJKJsx7NoEGLpjVjwENK3/VGh DgB3xJ36nQA6wUYxuIIz5jBDMiv/MK+4NteKvWTARIZ+9P7QjwLUGtLZAkLg/C/wa7bQBI3TOD2Z 2atxHQ1BO3BLX8HI+TcYOQn2r+qqLdIWF2Ni7Cwf9zIh1h2/13mfvJ/ZJ33gej9+lPmT9LHrcFx8 AN/D3OO8X74/fk+Cl16WXnZp5smfo4PSQdcZdFz6s0tAgxOI7TK5qWowqAAoyaJBX1ZWfVkn/BHL +R0zqJkz9ald6ATN7Jn4o8ygbwSEmGsMSoP6qwwhJEqi63LULrW7tiJSb1MFE483MPXxFmZ6/GuO +e4bnHfIb+NfMa9LrzrfkF9z/Tz+cuILfMYhmzDPmOJ8wouDjCPuTjTiukQrnpa4GK/HtnfwAec7 8gEyqUZcUFX4gIRaot1DUpJ1B70TMtCn/04spePgU0tpJpi3lDaZpyxyEP6EIHSW82CNskQHMYx+ 7DyG0XuZR88V+qQCC+lHhx/SLJvISqoXBXRFDaB7RhtA71dNmgE0tX/uGW3/vF8t0+yfY/EyOeFi 3YhlSp0SZh2a9bOMnKKTcToZk7wbX6mKwWAgYDabeM0M2ux+nvkQOZgP1bAKBNdHSe4gOo4MRIYn FLgFIniEykrf2oO/BSual14v1nHoUIEFNBUeaXxT/hJFqsKj3ogRNKHLESPo0cRJDKADugF0QDeA DmgG0HiUBTRVFLg9tuKc02N1EOPnX4G4F8w56wFohvDEMDoCgKocHZQebbk4IV3Jk2M8AAo3LtZ2 oP61ozge1DFs6wyvkfQtI1o3WAUl3adhqIGk+y6d4CXdL7SNJj6lkWq9vOpiLQycmqT7NL/Tq+V3 6mGoq6T7Lip2N5DgM8LZnKXOqYFQ1T9idK0zDJrRtT7t1A0bXa/8zs+eGt+Qa93bmov52+Zet2uw fbYnlWv9WWtuXP0TP8VfH7qF2cs2VJLJJal4hl7Ac4Z24GbdbqG8gTs9maxVz8CMUw0zjht3qrEV 5r+YGZfZ72beMx82M1OEZvdr6HWB24+OCozktlms5Ka7j9SIYM0I5E68UpQwR4UF6GJ3l9DlNpx7 HxEQSisa+VXxkVuKBLe24ti1Cz4FZHabLELQvZs5tYuYR2IEMsQp1YSCsBgaNrncu3FYFe2CKLQJ vcJmYYtwUDBAef8CHxFWoQjnJuByTO0m9gAsht7i6zYP3/sC68uJjtNN5H7IY4fpgD98GtaaBpwf 3B7N4mFkdNrlnDBI7OFp32o+3f8TCJuN8tbIwiAAt36CnTYMObxppo9HRh/e1Kw/+zvi4fr81Yc4 rImw2XFhF/4rXsk2VCQbvrqJKRl6jdrFs+1vNkD/7nqSccnkSOYfyY/Po1rdfQe9Sxzeco57DtaM 21iBPcw9WPQl/6yh3fCS8WbTeNNVphdNL5r5Avez0c663Lrc/j3ixH6x3/EacdIJ6YT803Od68fu HcWrPMu8q3yLfIv8Qf9T/qeCH4WmhjsjMyP7YvaYPW5IvFm6rCxSnqvcRFxV+5gbc2NuzI25MTfm xtyYG3NjbsyNuTE35sbcmBtzY27MjbkxN+bG3Jgbc+dz9Pc/57NHEfmVTvLvSgoJjpGbhlj6W55G zOg4i1rRPh3nkILyzxYhD47oOI8SeIKOG9Ca4XKMKIUe13ETKsFLdNzK3Iev0ax94F+Wk3UcI4Fr 0XEGcdzdOs6iODeo4xwSuW/peBGycI/pOI+c3DM6bkDjhssxIg93hY6bkI17VceteDb3GygZcyy8 y8bX6ziHEnyQ4kUQb+a/puMcivDTKM5DPM9fq+McCvIrKG4g7cY/pOPQVvxNFDdCvIV/Ucc5FOMf pbhJb38N19pfw7X213Ct/TVca38N19pfw7X213Ct/TVca38N19pfw7X2J7iZfvtBHSff/kuKC+Q3 vA28jnOogv+M4hZSN0NKx6E+Bg/FbRAvGi7VcQ6VGaZQXKTlpHQcytHzO0kbGm7QcWhDg1ZnmdTH 8IiOQ30M36S4C+Jlw2s6zqEqw08o7qb5/6zjJP+HFPeS/EaPjkN+o9bO5FcpzcZpOg59aqyieID2 6UM6TvpU67sQzb9Cx0n+uRSPkT41flPHoU+NfRSvIO1jfErHoX2Md1G8mpbzho6TcnYS3FjQ/saC 9jcWfJex4LssBfktBfktBf1iyffLj2C01MIISANU0AK0EnWBPwf1wgjpRQPoKtRHY6bSX4zvo3Ap xHfTHElImYx6wCloHsRdAc8PoHU01AV+F+TeALATcpIS1kO4m8YqaC74G8HvpvmXwt8ALbsT4leD 349WQVwvWvH/VK+zc0644PunQHwPvFVBZfBUN1oOqb1QJ/LmAVSOLqb51+klK6geSh8P7TW6PK20 djQfVQ+XOAfqPRPwjfRryBtmQa4BcD30mcW0BIV+7VXgr6ctRtphpd4qK+g7B2j7kHAffW41pJJS umiZy+izA3oLNaOFaDb0ifZsf0FKH/2uTnjLclpiN/2ajfRdywGe/71amORdDrVeT3unk+btBdhJ 0/sgRfuCpZCvU39Xt17Ccr2sLgrJiDn7u0l6D8XK4Kly8MkIWDb8pvPVas05Jf/jbTRSeict6QqI 66fjdYDWe/nwODr/t2tvP7dejQUtQL5E+5YB+r78CCXla9/aSUcG+fJeOurP/6VaOy8d1aba+O3V ofZVGr4eQn0UKrS2G+jXdA2XQ3L2QI6/20M/UmpT6VplwcouZU7vmt6Bq/q6lKm9/X29/UsHunvX JJXJPT3KvO4rVg6sU+Z1revq39DVmZzau76/u6tfmdu1UelepyxVBvqXdnatXtq/SuldceGy8pET Cp+f0tvTqZTN6V7e37uud8VA+cVd/esgs1KfHJ/W80G29vnVJOOcBTN7Ny7t71RmdQ0M9HT1L+5d r6xeepWyfl2XMrASqrKid82AsnSd0tfVv7p7YKCrU1l2FaR0Kc0LZ0+G1H4a6Ovv7Vy/fEDpXqNs XNm9fGXBs+B3r1nes74THh3oVTq71/X1wAuWrumEp7ohw3LI1bVmIKnk3927pucqpay7XOlavYw8 NFLUmnzm89aIZu/sXnOF0t+1bqC/ezlpo4K3w+PDZTXSCpR1w1sGulaTBu3vhrd29m5c09O7tPCl UOelWk2hfeFze+FVANcP9K0fUDq7NnQv7yJ5Vnb19J31QRecM4l/BaWZARibF8o1gNZjK4ypIxfM sYJS1IVSp+uz5QXS2W+yL7GvsnsBPvO/1rT779Z0NsSshPQNkE5yrr9gzhmUitfRmXqAUtWFa38E 
aG0VOgWlHoGUC+W7mJZ0odSZ8LYeKGHF383VDvHkK9fDvKbNXFf9Qy1ywdpzIW4S18hN5eq58ZzK TeRmcQ0XLHHB/9rPs8hX4DTkuXAOMpr6oL0uWCfsQP/ORmEuvXAv9tIZfakut6ChKHoToWEZovAf iwjHTWzCzxB+CGIsIE4MUpliJcD3Ie4DcCzax9yBMPNt5n7EMg8wDwD+IPMg4A8xDwH+PeZhwLcy xwH/L+avgH/BFiHM8qwBsayRNQJuYoFDZ82sBXAr60AMK7E+iPGzfogpYUsAD7D1gI9jp0PqDHYW xMxmrwb8GvYbEH8tex3gg+wJwE+yXwF+moNv4DDHEHmESACcmfDdnBU4X5Zzc8WAezh4C+fnSgAP cFHAY1wC8FKuBvAUlwa8lssAnuXqAR/HTQR8EqcCPhlkI5Zr5WYBPpsDfpZr49oAb+cugTcu4lYA fgXXA/hq7mpIvYa7DvBB7vuA/6CoFOGisqJKxBZV8ZMR5qfwMxHLt/CtgM/i5wO+gF8A+EJ+EeCL +ZWAd/NXIoZfxa+CmB6+B/DV/GrA1/AbAN/Ib4Q8m/hNEHMVD3Iefz1/A8TfyG8GfAt/L8TfZ/wV cMK/Nh5BrPGoYEVYsAluxArFAtRHKBMqAK8U0oDXCnWIETLCDMBnClA3oUWYDfgcoQ3wdqEd8IuE iwCfJ8wHfIGwGPBLLbOAc55tmYMYy1zL04Bvs2wD/BnLM4i1bLfsgpjnLLsB32OFvrZarSDZWB1W B+CSFaQTq9sKPWL1W2shps5aB3jGej2MNE4fpeTPjAZxO2KX9i9dhuSVXcv60cyepQNrgMqgtOmT 5ymoZOG8aYTDQHTskrFs0HFSklHHGZD+TLRkEsYg/8HzsxfMVJB7XtscKIXGo1HQvKqrfw2aROFc Ci8jSzhaQ+HXKbyFwjtXr1q9Cu2m8GUKf0nlax7qYoT3mpHwD4S172Z0vcH5cQdQphXZgFZFRIz2 nUhGLpCDi5EHeclBJvIlRBMA33u2X4OWADX3oWvRLeg76B70MEi6P0G70E/Ra+gtoPGP0GH0GczQ Q9iCg7gKT8LT8Vx8Me7Bd+C78EP4UfwU3olfxK/iN/Bv8e/xIfwpPoG/YjjGwshMidZn+FV4Fwb6 fweRXzvmbLtsx+0x+zx4P8Tad8I3g+/8jua7dpIeQth9m+4/qvsv6/7vdf9LzS/2aG8prtfeUvwW fQv2/ADCBvDf0uI9n2m+d4nm+2dqtQl8GbQFI8F6GuKDjwZ3B98IHgyeCBm19ND7oaOhIUXS0pVB 5S7lCeVF5R3lkFZOeLvmRx7W/OgmmtMYmxdbEbsmdmfsidhPY7+NHaWx1vgj8Z3xX8Z/Hz+e4BKe RFVCTSxIrExck9ii1TrxGYEIl8q679N9RfcT2teWJnW/Sfen6/4C3b9M91fqz23Qald6reaXLdL8 ihWaX/mQlq/6Sd3fDiOE+EfBvwPyTPlb3f9/B3VxM/9D3ffA2VSt/T97r7O2Y9Y+Y2KMMWbOn3GM wWDmzJxDklxXKiG50oSkSUKTNEmuXCS5EpokqTRXkiRNklxJkqS5rusKSVJNktSVpEkS433W9+w5 Zvypbvf+3ref/Xme/exnr/Wsf8/zXWvts/f41vyW3feIecSZOUxRT88WLstlkQKm2y6vy0d1XAHG 9PMYzbMokdE2k1KAs40YYa+gdKsX42xTRtg+1MzKZ5xtwWhYn7IYy/pQnspnRGsD5DofmNUO2NQe qMSIHbdJ9dZPL4wk3s0a4Y0gyonn85YYUctNZ1JOIlMKk58pgymLKcTUlqkDU2emrkw9mfow9Xdk TQVMPPvmDGcayTSGiefknMlM05hmMs1hKmFawHXYwefFTEtZ3s1nRuDwHj6vZlrHxJiUs5lpO+v3 83kXUznTPqYDTIeZjjJVEoXYT9pmMrViOY4pgSmJiaM8zF4brjBvyfHmHclb1cbKCeYdB0XC8Tnt w4k5s8Ndc+aGe+Z2i+SD8sN9NIUSwwtCKeHFoa7hpUwrQusicSCzdVbu2NZZOeMiY3PnRcZWnfN6 R3rl9Q0vjtlvHqWQn/Mw5WSHTSZ33kBON5jTzQ/3yVnElM3pslGfFK5PCtenP9ena57FdZjJ9juF /TmXMpVy2tFMPVjWtJzlVTXquZppXbXrMpCfrzP4uifLfZgmhytD05hmOlTG9dOU5VB5pJVDeaB9 LDPluiJ5oNRIF1A6y+nVrjNZ1tTqZyg9vDhKbLuMr/O4bgV8ncfXftjIz+3FNIDHYRBTt0gc6+Jy h7GsdVxGyIwMqup3zjOPaWHunsjK3P18/1hko6Y8iuRp4ntbmHag/9aG++TZTHV5vPTZjuyuaj+P 3x49fnzej3GcHe7J41DA4+LncfFzffNBI7iMUWzLGV9nXM2qccyLsC1N7aOUu4PtFrK93mynL9MG zruJaSDLmrQfbGUaHM7gcjK43CFc7vCcwnBWThFTaXgkj/sYJj9oJ6f9iGkcy5Oc/NqPpoZDOcXh tpx/AudPCWXxuIe4b/vzmPev5gcFLA9hOuUnK5g2V7vezrQr1Jav2/L1cE47skb6XUzloTnsOyVM kyMuh+KYEkJz+DwnksD+WRlagPtJTKlIu5hpqUMrnPubOb2m7Q6V/QxV+WlVPB5g/2TKjYu0Y+pY zX9ZDmdV89+OoX3hLPbVbqD0SEdQlf9W+XeVn7Zjn9TUhX2oP3y9xvjDTzMj6TE/TWdZ04jIgNxR THkOngxw/DadfSHd8WVNeQ7tiAzSdDqusI/nhY7yvYUs72Od4/dcR+c6HB8yGcPGhfdx+omhozXz 507ka01LWF7GeY5GhuWOjQzj9CV8Pz10lM9jwyVO+imhA076qvJ2R+bltuO4OshxVcH119e7z33N MbaEaRnH4Zrc/ZH1PEZbNLH/Tua4m8Zjt4VpB4/1FqYdiMvo/Zm4v71K7/jpGPaxPuEVeckcp16O 12BkYTRmmZqznB1ZeIZvOFgbWh2lvE4OXRqlqvtcb1BVzOcc4bPG5L3hOTlfhjPyevC9Hrh/MK9v pCKvd7iAz8fy5rahvPlMi6J0yrfCc/isfakXKJN9UVOexjVgWzrK1dg0JdrPOcl5h0Kr2f81OT7d YmvNg/GgA2g2j9NcHusePD9oWs5zxCqetyLhzny/szOHlfBc0RlUytejcR2P6/mMZYsYy9rjOj6G bVWY4mBQtb6PzhWnYn4fUwfEdgm3ZQ5TSSQz1u9VMVgVQx25zUxV7aiKpdj1afd/7oj55kr2yzXV fHt99JrjYgT79gj25SnsyzN+qd2qfoldO9h8rutYfaowaJdDTvuZ8kCn+u0A0+HT1hJHQ4c5LVNu AvdVEqevsteB03XmdBO475lia4OquaWI1w6j2U8Pcb2PsE/qNcc4vj4e7slYUBD111Nz1+nxkVfK frucyTmfhunrGCNaOZQHqmS58tR17gw9XzMGbeR+34I1D+KDy5vJ5U3LncVj8hiTc5/HZZQzLrN4 
XB7jOOuVN4nLc3N947m+U/laz42lXN/RfF3M17P5ejlfrwoX/GbW1CY1NL82D/F2PU1kcw1yRYSS xN2iglJcPV1X0gxXb9fVVCyz5FM0Sy6Uzxq2LJXrjQS5QW4wmsoyyzAyucLSKLDclscYZCVYScYw K9lKMW6zUq1UY6TltdoYd1jtrIuMB6xrrEHGw9Zga4jxZNxtcbcZC1QD5TWeVv1UmfG8vc1jm430 kwjzKjw3qMuU5zyB0L9zSjFVTOUNe46VQ4Yn05NJpifLk0XCk+3Jju6pXIFYzlZOTq/ed4v7xH2c M9vKJsP+wuYdlv2VffCMPJlOHv3rqfA09TQl8rTwtODSWntaR3dn1VIPc1L79W+bIsx9R1zOVO7H r8VhcsmIbENuqxXXNs6KWG0onnvhAkpA+XVRfqL9jX2YkrjNqZTM5fHYcnlZlMrlZXPNo+UZwi8W ndr5ZG4BUZD3q5k7YkT+rWdSkHc9Qd71BHnXE+RdT5B3PUHe9QR51xPkXU+Qdz1B3uUEedcT5F1P sMCRNfGuJ8i7niDveoK86wnyrifIu54g73qCvOsJ8q4nyLue4AIm3sNm8m4nuJSJdzyZvOMJruYz 73CCvOsJ8q4nyLue4HamXaw/yGfe9QR51xPkXU+Qdz1B3vUEedfTxMUUx/tj3m+34PFvwrueJrzr aZLKlM55K5iOce90oJ5UgCfRE2gqzaISWkTLaDVtoC20i/bSQTpqmEaCkWIEjZDR3uhi9DT6GoXG YuMAuQITA48FpgTmBWYEFgZmBZawZmFgZWBJYE1gWWA96zeS6Sv0LfIV+UpZGu4r8Y30LWYp3zfL N8L3GEt9fVM5xWyWOvtm+ob45rDUxTfKN8A3i6VsvtfbV8RSyFfg6+objrv5vo6+ASy19/XwRXx9 WWrLubN8XVlq5WvnS/d1Yak55072RVjy+zJ88b4Qmd5yzunSd717fJ28x3xeEs0SApFmSYH2zVID HL3eNd5jfI9BxbvWe8i703ucRMAMJPqPB1IC7kA8mYHZgbmB+QH2JW+Jd3NggXcX6yZyL8wIcI29 U72rAsXeDeRKG3fqyNzDfV0nrfDXHZk7eKQT0lr9+iOzJHMF1Uo9cPYjsyBzzG8ITWuLBMHeKsaL 8VVPxa1brVEUp0IqxDGvn9PWwxPa+ngSm4RnsClkGD2MNRztNu2hMEcuR2nqoTPJz5Hr58j1c+T6 OXL9HLl+jlQ/R66fI9fPEevniPVzxPo5Yv0THFkTR66fI9fPkevnyPVz5Po5cv2LmThq/SuYVjNx tPo5Wv0crX6OVv8uJo5UP0eqnyPVz5Hq50j1c6QGOFIDcUzc5gBHaIAjNJDOxDgaYPwNcPQG2jF1 pHDqTO9s71zvfO8ib2nqHO8G7ybvVvbRj7x7vV+ytx7xHveZPjf7emLq6tSS1AWpi1OXpq5geV1q Werm1O2pu1LLU/elHkhLSktNS0/LZN+Yl7YwbUnasrSVaWvS1nuXcyR14Fjqqt/cgQ+QSGIfMOAD FnygFnwgDj5gwwfqwAfqwweS2Ad6UBp8IGD1YR9ozKNfl4IqkX2gGXygBXygFXwgm0e/IeX8r5dn 0ADGOu0tnchDlL7WoQ3VaBMT43/6TqaPGDe9TEGW9zJ9SZ5GgxoNazQiLSst1GhUo7GNJjZaGKwI Hktbl1am3/epEU31RH324is4OiTHxVVkIS5qqXqqHrkRBbU9KRwFClFg/8f5q2bXZGd2DehZ0LpM z772WntT9L0z0s9qUxziJp0ljclpvEgZtdO42t1oam0niSmV76RS9Ml58KyWksjvpE1CqiZnsSX4 XlW6aIkZZ6Qy6VL9jtN/4ZlsvDndfJj7+BHzUaqNX/9s/BLlcW9y/5Pi3e+4t1Jd9w73Dkp073S/ T/XdH7g/oAbuT9yfULJ7r/szauje795PjfAbVCp+WfJxLUtpOTxLjwA1OELdGxxpcLzBomQz2Z3c p0FpcnyDncmJySnJ/gZFyRl8tSg5izWh5LZ8dGhQnNw5uWuDnXzuiaMPp0xpUMTHzuT+oCM4qlms YU/bgqWYnT4Njif7k9s2KEoJwk61Q/epOVu//2bOM1/jvnjDfIu85tvmPmpsjbHG0O/1GpM6K5/K oIvPWBc21etCzskYaC40V5E0V7OVFKROdWxr/0pHf+j32iiZI6qhm4wk/aTfxO+snI7LMKgddTzV b0kdqV5SOz4WJi1hvgxHflJ+8uHko8mVDV0N4xomNAg2TGoYLWeOnjnMZ8xnuCbPm8+z5gXzBba/ zFxGwnzZfJnr+SrXTXLbysiNVsWhnopnjimGjlmbetF5RPU3/EdkJBdS9/oT+JjMNI1ppkNznPOE 064nO7qqY5pz76eOaefQV9mc+TP5q46fq+Pp9TtXvU6vz+SztOtnDqCejkJCFBqIQhNRaCEK3YjC 2ohChSi0EYUejsIvqM4v9mLD7GLOZF+2edXLa/t6vPavRnQWOpf+XGmr2zKTJuLcvV7hGcdUPqrk Yj7OTFFYr4iPwnqz6606693oMbfe2ljKmnk3xOT59TZVK2sr80V8nNtm9VpFSy/9yTqc61j+b7b6 p9tbvY3VW1d1/NJ2/ceHxovY/PEIY8+jPIvEuf/u/jv75hb3FvbNd93vsm/udpfzXPKp+1Oqh3ki UXVX3amBukJdQcmYMxr+W/jbk2kw03AgcCbm1r6kf0/t5aByJtKNYerA1KVauhByNo+l4x16tFyU 4kUpPuDy4xyB+m1mHYOEGHQhBi3EYC3EYG3EYBxiUGEm9MCibgOhDRJtaIL6cETTDKbtKLsZdIOc eu6vpqtq36FqusNonUG7q+k+ctq3pJpuI1pn0AhHd/b2maT+o5HTY5asxwx5CHkM5DGRRyCPG6lr Y4acbk7n0h7kMg2UZsFGLdRvhlns1ESgFNc5+9DkvsnHyEfHrzna3JmpL1P7mM7kkR2Dka+eTvdp yFn1EXqpapTP3kf/vbE3eexncXmLnfq0gG436a86ZtfQfUSFfDWxhq49Rn94Dd08jH7fGrr5GP0u Md3P9fv/O784+5j+Um85Wx8atII2YW2Uot+dqTOWiXupzhTqzvuZ3+ih2+Le5t7Gbd7j3sNt/tz9 Oet+8SxNy2jVqXWjnU0UP5a62xl8ZGkePxpy7Ozcyap2ddpxKmV87yhVyxe7X83embZ+wrqOKPd7 7l2/toWqB6i7ZyAfy/kYqApUgb5S5dAVQZ4QPUdlHAuqrnUOJ2XRKZ1zLPcMrrJYzV7RKUs1LAxU nc880MLt7n3/xnxlMj6X8Qq7rROX+osbYWQZIb1HMoI1tOlGqt6TG/E1tDaLU/j6aHUtHaHjpHcv e2poy+kA9efrTTW0GxlruiCGTmlNLj+br+bFdD+NF3XN+fpvZppPmwsZN581n+XUS8wlvJ9Yai7l 3lhprqRa3BtvkNtcz31S2/ynuYXxZKu5jTzmu+a7VMfcae6kBHMX/vJsuVnONj81NYb4lZ8xpLFq 
TPVVE9UE++NoP+udzYPg08EfBZ8JPgv8Yd0Ww2XEcb8lOG1pCV3Q0Bh/vLqOdhtJmG2r644bbszK 1XXHqIKv1tbQHQAOL6mh20tf8tWcGrpxeMd2Ug3dUlqHWbm6bjbvkw0ereq6EaT/t5BLa+im0WQ6 9RtGS+ephB57f0x3qm8ehG/qsSPgrQG8NYG3gvH2I57Lyhl1a+nU7t3V+vMhaLaDb6nWw9Odftb6 D50nGfppR3rsl45WVLXTNXi3GU2nuZ6j3JjjnN8gcHX5Oa7i2PbEk21j1zbV+jGRBuFJxaknIJzn 5LAaGp1u91nTDTxD0+30nMeLzsgZR5JXpHGxNtXAKG4TWYupuzXvt3pUW9f8QhQ2FhkHsfsv4naT xRHiNmOkr08n6GsdP0Vu92nX8THZkGUxgo7zd5cF/3eH5fq/LP3XH/+9VfsvnJv3ACttnjl4NAVj plzHVHYO2uyct58iV13qLkb/+kPO+09y/9zxK1f4vyqmXPMZS5bGSF+fTjX0jMhnppldLe3IGFXp upvxv+Ej0aHf2PG/HlN6xjtWbRWp34F3V444sbf68W/M4M7b5uxnerYti86emH31P+PkXPCl4Fgp ntyMOa4MHPM3z5LMK7GuMTqCZ0HfKTq7nzzCsoU0g6HvgzTrcHc09NF1TUvIHTQ386H5Dvx18Gng T4Bjz2uMhTwefBT4fdAvgPwy+D7wreCPgZeCT0ApBZBDkG+3eN0rLpb6icoGeQzyYsj6N9EN1mRw vbra4NK/KF1sLQC/X/+6BDmPW2YYKx0eZM0ILZupJxaxpvjEMp0mKmu9mKNlc3wtXhWat9RqzLyJ xaNlPmZ9zLyj/AfzQdbfIH/N/DLrK51SfqP1shZ4ul6xQXOZ5FWj2U/mMx9l3YOUL2h+Qn+N1a+y I/OnTtzNfMqJ0Vpfybtis9eJqVq2eH41G1nvoZRtzOuCv2w9Cwt3cJ3x5YmB38OMCvBijGZ0X3IM YzcfHKvPyjGan9gFGevMk3hTxcwA74xch5z13w1sbQD06ajDIOsvuhWyWNfcuhy8E6f5ztL9kGdd wrmCOhd7V/S3MD1GXzrWLuKytlV+wXzAyTeQhvvh5P0nv9ecIswnnlypx5192XCNE+fzWAwWN7Pl Hq5urH9BTGJ920r9fZmffZPbW6lXtRvAy3gmM1zDZF++21/7iZmhR981TseOuFpzMwNxtBtyoYsx W2x3faN7yaV/Zc/SepeFuyXgHWu3At+se6D2Ql1inN6PVNQ+osdCr37MRcd3gOsRbyef033o0uk7 yxHQ6DG9UI5l3t7iFbeZInci727odd4LralIf1Jzi73RzLU2gO/iGj6rPV/Mgfc2l5nab7XM/t9X y9YW3RbrbVh4gnnQelxreJy5FSc2wot0KcPkCeYTxbc6vsR28C91ROt+M/uJA8z7Cm2nkzioa467 /cRM8IVIPwNcz3WXuSxdukiDBR5rsz80C8B7iV7w8+HaZ8QcLbsytOzyMW/jCmrZPIq8l8J+f6TR fd5L6ojo5arUetfzyLUC0aQjq5XrO8TXhUj/Du7uAt+N9O+B61HoIzrBfgLzqUJHaIFLR3En6YP9 d6BnJDGztZ8w172UjV51oyZuVwL4Tj3iUW65mG/SX/8x763vRkdEpzc2Ic0C8HI9aswztR68XFsz Vorj2ls0N4qjspyGNNMgD9J11n5lZDl6bafC0Whe7KSMysyFrX2M04yG743Tes1Zngt5LtJ3Qfou Ogr0iBgVKD0DdatAKzJQYgX4AI2xZgasVbj26HkBudbpvzRjrNO9xHwaeCZ41P4a5F2DlMilMdnM Rx1Wg+fLQ1qv+5z7U/d8vpUFTRE0vVGHQ+Cog5a5ztsR0cXgWdCjLK2nY+ilARKIh1YPcPhI6HEX fyPnmI56KgM/pvNyu8CRfqzTokGoT2/Ih2ChDDLmR/hGZ4xUZ4xFZ90DYgTQoHP0LmxmwWYWcnUE r4DlYrkV+q0oMRNc68fqdtFy9MYAbZnbpeuTDF6qe4PlxRhToJmuIY/+IPSGRont8OEMzY0F6LHV 4FNRz6mQ86Ma9GR+lJuLMC7wPXhyRdR+1K+iva17jzUYZbSumOcK3ZZEaOaD2+AbcbcEHLOq8KPO w5B+NGTgLe5mwU6WRngjC7ixGtyNel53Usdg0Ukdy6NOav8ZAt5LP/E3x598S9dZz+lm4cl5Wn9S Y+DBkxoDs4Hh2SdmMZ+lOc3mmVBz/SvK7sq1uocxI5RBLsM8Mht8kCN3gKxb1xU8S3Ozrs5llOq6 0b6oRqfnNJqXV+7XI6VtGivBN/EqTGuWQIZvROcs2ByAuh3T3IhH3cp0uSxvB9dpilDP5ZW6h4sq V8DCCtztgPQ61zGt4XbNB++AntE9kA8L63jO16sdjZZTwTOgGaA5lw5eqUdqHVo3ADU8VglvQU0G oFbF4AOQpit4Meo2CBYm6jZyfZagf/Tdcuj3RdNEZ2GstUqjHHUrhc2u0ZQYr1FIvw+zfNfoXI9e KtM9QGWof91oi2ChONqHaPsx9Mli2MxCSjfWANuxKijUmMOxr2e9bNI+E6DF6KXWel7T6y5a7rpR c9lQr1FdetX9pmum7u2Teg2QdlL/kvj0yX6QF1L0G+iFppeqfwM9BN9Aj4p9A52Bb6Db47tlN74X Vrz/bkApdJ7zBbQb39/aVId3D42oLtWGVv8dLJN3qAm8J2hIqVSP4mLfSJu8VxCnfSXtrfZ9tEH1 nXMSpd1wwy0jaCn4KvD14JsHFQ69iXaA7wbfA75/8NDh19NB8ArwY0OHDx1pELgFbg+9/dZCoy54 MriXs15vBMFbgbctvPWGQqMT+KXgPW65cdBQozd4X/CBRZzUGAxeCF5E0XerBL6NPptk4stsjfgJ Z/DzwGufwdUZ3H0a1+9O6C+3zy6d+u6dsDuLPlU8nXtq8GRGqnbUibpSL+pLg0n7xOlfd2+i7bSb 9tIBqqDjjt2tznln9D10cr5DNpyz2Y7bX5uE2dscbI5xvudeEP1i2zUYLTZcm5zzvug5bn/0HL+R 0/O57oHouV5CNH9in+h1YjHsuBPXJG6tP6X+sqQlSeUNDiXHOd9tr45aSd7kXFc433H3cs7OF8wN t0fPKfppLe+P7R6o+Uj9fhq5atm1PLXia9XB72Y/6H2K4TP8eIv+MI9NKrWijtxn/bm/RtN8WsK9 VI4xt0VdMs3vRQMyRS1Hcwk0KdDwGOl3CkU9595luJdULXVXaBrGUicitcT7WckcP0GU8A2sHkb+ CuT5zn6RU1rIk1yVW+vM72vk5nvmUV0/tpECG0mw0VDbcOrANTS/1SWbR/DLqP5LHIS/weEW9YSO cZMyKMGV4mrsauhKd/lcXlfAlezK1O9EurJczV1NXS1c+j2AWpz3MIdDhbZtfs92XLBjibrcDrdI 5LJq4y90KDy3EtYEa7ypsUPon5C41DgRp9cRwkZsRX9vd54RcJt0xHXEr+rZ1XSCsviIvh9Zpa16 
i9E0fxAm7NpoR+wJvDGXEawt2+5GfWggDWNfGEdTnNFdq9/K1y/MGYmG18iMvZOvf8NwqUy2+iik ZjGpeZVk/oOlOZA2x6R/xqQtMekdSPovqcRTorlVX5lvkKl6mJ+yPBtptsVSb49J79bItwP51jGf br7J/GGkea9amiRzvbZnvsUjO4fPO2OW3o9Ju2LSBzFpd0z6MCZ9FJM+jknlkGox2iUTzywcKxFq b/6NS3uCy/sbSn3CfBtvIW7kqxK+3ghtiVnG2hLzk5itPZC0H0V/259nLuSUi8wlFGeWmqVUx1xq vkgJ5kvmcqprrjBX8TwU/WuaifidsSNGv57zvuSTfOM58zm2uZzTC/M18zW8I2Cas/A0Tr8FZzrz lsS82dj5KzVp+Ps0XrbxOvnwdO0iPF3rGPVa8YWVKB7SHmol4deKFIvXkPorIm3PmExrhVf4RVBk iizRSoREREwUk8RkMUVMFTNEsZglZovHRImYLxaKxWKJKBVLxTKxQqwSa8Q6sUFsFJvFVrFD7BIf iT1in/hSHBAHxSFx2HWl6yrZUraWOTJXhmUbeb68UP5OXiwvk1fK7vIqeY28Vl4vb5RD5S3yVnmb vF3eIe+Uf5R3yT/J8fJueY+8V/5Z3ifvl9PlA/Ih+Yh8XP5FPiWfkS/Il+Rf5avydfmGfFO+Jcvk P+Q78l35vvxQfiI/k1/Ir+Q38jv5gzxhGZa0alse6zyrvuWzAlZjq4nV1GpmtbBaWq2tHCtstbEu sC60LrL6WgOsAmuISlYpKlX1VwPVIDVEFaoRaqQarcaqCWqSmqymqhlqppqtHlMlar5aqBarUrVM rVCr1Bq1Tq1XG+zt9k57t11u77H32fvtL+2D9mH7iH3UPmYftys9psfyxHlSPX5P0JPpyfJk8zis FWkijSPfJ3iFJBqLxoySTUVTHr0WogXjUkvRkqTIETmMTmERplribnE3Y9Q94h7GqHvFvRQn/iz+ TApfZNliuphOHvGAeIDixUM8+nXEw+JhShCPikfpPPGEeILqiifFk1RPPC2epkTxrHiW6ovnxHOU JJ4Xz1MD8YJ4gZLFi+JFaiheFi9TinhFvEKNxGviNUoVb4g3KE28Jd4ir/ib+Bv5xD/EP8gv3hHv UEC8K96ldPG+eJ8aiw/Fh4zsn4hPqIn4THxGGeIL8QU1Ff8S/6JM8ZX4ipqJr8XX1Fx8I76hFq6e rp6U5ert6k0tZZbMolaSD2ots2U2ZcuQDFGOzJN5FJIRGaFc2Va2pTzZXransOwoO1JEdpadqY28 VF5KbWU32Y3Olz1lT2one8vedIHMl/nUXvaX/elCOVAOpA5yEO/fLpJD5BDqKAtlIf1ODpfDqZMc IUfQ72WRLKLOcqQcSRfLUXIUdZGjeSd2iRwjx9ClcqwcS5fJcXIcdZUT5AS6XE6UE6mbnCQnUXc5 WU6mHnKKnEJXyKlyKvWU03g/eaWcIWdQLzlTzqQ/yNlyNvWWj8nH6CpZIkuoj5wv59PVcqFcSPmy VJbSNXKZXEZ95Qq5gvrJVXIV9ZdreA9/rVwr19IAuU6uo+vkermeBnIclNH1cpPcRAVyi9xCN8jt cjsNkjvlTrpR7pa7abAsl+V0k9wr99IQuV/up6HygDxAw+Qh3rHfLCtkBRXKo/Io3SKPy+M0XP8h H7rVclkuGmG5LTfdZtmWTUVWgpVAt1uJViLprxG9dIflt/w0ykq30ulOK2gFabSVYWXQH61MK5PG WM2t5nSXlcW74rFWK6sV/QnfDY6z8qw8Gm9FrAhNsNpZ7ehuq73VniZaHawOdI91jXUNTbKuta6l e63rretpsnWTdRP9WTVQDWiKaqga0n0qTaXRVNVP9aP71XXqOpqmblA30HR1k7qJZqib1c30gLpV 3UrF6nZ1Oz2o7lR30kx1l7qLHlLj1Xiape5R99DD6l51L81W96n76BE1XU2nOepB9SA9qh5WD9Nj 6lH1KD2unlBP0Fz1pHqSnlBPq6epRD2rnqW/qOfV8zRPvahepCfVy+plmq9eUa/QU+o19RotUG+o N+hp9aZ6kxaqt9Rb9Iy9zd5Gi+z37PfoWfsD+wNabH9sf0zP2Z/Yn9AS+zP7M3re/tz+nErtL+wv 6AX7K/srWmp/Y39DL9rf2d/RMvt7+3t6yf7B/oGW2z/aP9LL9gn7BK3wGB6D/uqRHkkrPbU9tekV TyNPI1rl8Xl89KqnsacxrcY3mK/hG8w1+AbzdcagShor0kWGaC6yRZ6oENPETDFHzBXzxAKxSCwX K8VqsVasF2Vik9gitoudYrcoF3vFfsb7A6LC9QfX1fICeZH8vbxEXi7/IK+QV8t+8jp5g7xJ3iwf lA/LR+UT8kn5rHxRvixfka+xjQz5tvy7/KfcJt+TH8iP5afyc/kv+bX8Vn4vf5QnxX5LiXSrntXQ Cln9rYHWIOVVA1SBGqyGqeGqSI1SY9Q4NUVNU8Vqlpqj5qp5aoFapJaopWq5WqlWq7WqzN5h77I/ svfaB+xDdoWHPC6P22N7vJ50T4anuaeVJ8StHwvsJWCvAew1gboCqOsC6kqgqwVcrQVEdQNRawNR 44CoCohqAzk9QM54IGcdIGcCkPM8IGddIGc9IGcikLM+kDMJyNkAyJkM5GwI5EwBcjYCcqYCLdOA ll6gpQ9I6AcSBoCE6UDCxkDCIJCwCZAwA0jYFEiYCSRsBiRsDiRsASTMAka1BEa1Aka1BkZlA6Ny gE4hoFMu0CkP6BQGOkWAS22AS22BS+cDl9oBly4ALrUHLl0IXOoAXLoIuNQRuPQ74FIn4NLvgUud gUsXA5e6AJcuAS5dCkS6DIjUFYh0OVY53YAt3YEePYAeVwA9egIrrgRW9AJW/AFY0RtYcRWwog+w 4mpgRT6w4hpgRV/gQz/gQ3/gw7XAhwHAh+uADwOBD9cDHwqADzcAHwYBH24EPgwGPtwEfBgCfBgK TBgGTLgZmFAITLgFaDAcCHArEGAEEOA2RHoRIv12RPpIRPodiPRRiPQ7EemjEel/RKSPQaTfxWu+ eJokAqKJaCZai1zxrbhfPCgeEY+Lv4inxDPiJfFX8ap4Xbwp3hZ/F/8U28R74gPxsfhUfK59z9VL fOvq5eoj7pftZAfZSXaRXWUv2UP2kX3lAFkgB8thsljOknPkXDmPZ7NFcqlcLlfK1Zxnm2giN8iN crPcKnfIXfIjuUfuk1/Kg/KwPCKPyUrxuWxnxYmAVddKtkKyE0v9rOusG+RW1Uhdq65XN6qh6hZ1 m7pD/VH9Sf1Z3a8eUA+pR9Tj6i/qKfWMek69oF5Sf1WvqtfV2/a79vv2h/an9r/sr+1v7ZMe4anl UZ40T8DTxNPM09KjvzKc9P9Z5OvVUhri34v49yH+/VgPBYAC6UCBxkCBIFCgCVAgAyjQFCiQCRRo BhRoDhRoARTIAgq0BAq0Agq0BgpkAwVygAIhoEAuVip5wIIwsCACLGgDLGgLLDgfK5V2QIQLgAjt 
NTM1IGYNCjAwMDAwMDA4MTYgNjU1MzUgZg0KMDAwMDAwMDgxNyA2NTUzNSBmDQowMDAwMDAwODE4 IDY1NTM1IGYNCjAwMDAwMDA4MTkgNjU1MzUgZg0KMDAwMDAwMDgyMCA2NTUzNSBmDQowMDAwMDAw ODIxIDY1NTM1IGYNCjAwMDAwMDA4MjIgNjU1MzUgZg0KMDAwMDAwMDgyMyA2NTUzNSBmDQowMDAw MDAwODI0IDY1NTM1IGYNCjAwMDAwMDA4MjUgNjU1MzUgZg0KMDAwMDAwMDgyNiA2NTUzNSBmDQow MDAwMDAwODI3IDY1NTM1IGYNCjAwMDAwMDA4MjggNjU1MzUgZg0KMDAwMDAwMDgyOSA2NTUzNSBm DQowMDAwMDAwODMwIDY1NTM1IGYNCjAwMDAwMDA4MzEgNjU1MzUgZg0KMDAwMDAwMDgzMiA2NTUz NSBmDQowMDAwMDAwODMzIDY1NTM1IGYNCjAwMDAwMDA4MzQgNjU1MzUgZg0KMDAwMDAwMDgzNSA2 NTUzNSBmDQowMDAwMDAwODM2IDY1NTM1IGYNCjAwMDAwMDA4MzcgNjU1MzUgZg0KMDAwMDAwMDgz OCA2NTUzNSBmDQowMDAwMDAwODM5IDY1NTM1IGYNCjAwMDAwMDA4NDAgNjU1MzUgZg0KMDAwMDAw MDg0MSA2NTUzNSBmDQowMDAwMDAwODQyIDY1NTM1IGYNCjAwMDAwMDA4NDMgNjU1MzUgZg0KMDAw MDAwMDg0NCA2NTUzNSBmDQowMDAwMDAwODQ1IDY1NTM1IGYNCjAwMDAwMDA4NDYgNjU1MzUgZg0K MDAwMDAwMDg0NyA2NTUzNSBmDQowMDAwMDAwODQ4IDY1NTM1IGYNCjAwMDAwMDA4NDkgNjU1MzUg Zg0KMDAwMDAwMDg1MCA2NTUzNSBmDQowMDAwMDAwODUxIDY1NTM1IGYNCjAwMDAwMDA4NTIgNjU1 MzUgZg0KMDAwMDAwMDg1MyA2NTUzNSBmDQowMDAwMDAwODU0IDY1NTM1IGYNCjAwMDAwMDA4NTUg NjU1MzUgZg0KMDAwMDAwMDg1NiA2NTUzNSBmDQowMDAwMDAwODU3IDY1NTM1IGYNCjAwMDAwMDA4 NTggNjU1MzUgZg0KMDAwMDAwMDg1OSA2NTUzNSBmDQowMDAwMDAwODYwIDY1NTM1IGYNCjAwMDAw MDA4NjEgNjU1MzUgZg0KMDAwMDAwMDg2MiA2NTUzNSBmDQowMDAwMDAwODYzIDY1NTM1IGYNCjAw MDAwMDA4NjQgNjU1MzUgZg0KMDAwMDAwMDg2NSA2NTUzNSBmDQowMDAwMDAwODY2IDY1NTM1IGYN CjAwMDAwMDA4NjcgNjU1MzUgZg0KMDAwMDAwMDg2OCA2NTUzNSBmDQowMDAwMDAwODY5IDY1NTM1 IGYNCjAwMDAwMDA4NzAgNjU1MzUgZg0KMDAwMDAwMDg3MSA2NTUzNSBmDQowMDAwMDAwODcyIDY1 NTM1IGYNCjAwMDAwMDA4NzMgNjU1MzUgZg0KMDAwMDAwMDg3NCA2NTUzNSBmDQowMDAwMDAwODc1 IDY1NTM1IGYNCjAwMDAwMDA4NzYgNjU1MzUgZg0KMDAwMDAwMDg3NyA2NTUzNSBmDQowMDAwMDAw ODc4IDY1NTM1IGYNCjAwMDAwMDA4NzkgNjU1MzUgZg0KMDAwMDAwMDg4MCA2NTUzNSBmDQowMDAw MDAwODgxIDY1NTM1IGYNCjAwMDAwMDA4ODIgNjU1MzUgZg0KMDAwMDAwMDg4MyA2NTUzNSBmDQow MDAwMDAwODg0IDY1NTM1IGYNCjAwMDAwMDA4ODUgNjU1MzUgZg0KMDAwMDAwMDg4NiA2NTUzNSBm DQowMDAwMDAwODg3IDY1NTM1IGYNCjAwMDAwMDA4ODggNjU1MzUgZg0KMDAwMDAwMDg4OSA2NTUz NSBmDQowMDAwMDAwODkwIDY1NTM1IGYNCjAwMDAwMDA4OTEgNjU1MzUgZg0KMDAwMDAwMDg5MiA2 NTUzNSBmDQowMDAwMDAwODkzIDY1NTM1IGYNCjAwMDAwMDA4OTQgNjU1MzUgZg0KMDAwMDAwMDg5 NSA2NTUzNSBmDQowMDAwMDAwODk2IDY1NTM1IGYNCjAwMDAwMDA4OTcgNjU1MzUgZg0KMDAwMDAw MDg5OCA2NTUzNSBmDQowMDAwMDAwODk5IDY1NTM1IGYNCjAwMDAwMDA5MDAgNjU1MzUgZg0KMDAw MDAwMDkwMSA2NTUzNSBmDQowMDAwMDAwOTAyIDY1NTM1IGYNCjAwMDAwMDA5MDMgNjU1MzUgZg0K MDAwMDAwMDkwNCA2NTUzNSBmDQowMDAwMDAwOTA1IDY1NTM1IGYNCjAwMDAwMDA5MDYgNjU1MzUg Zg0KMDAwMDAwMDkwNyA2NTUzNSBmDQowMDAwMDAwOTA4IDY1NTM1IGYNCjAwMDAwMDA5MDkgNjU1 MzUgZg0KMDAwMDAwMDkxMCA2NTUzNSBmDQowMDAwMDAwOTExIDY1NTM1IGYNCjAwMDAwMDA5MTIg NjU1MzUgZg0KMDAwMDAwMDkxMyA2NTUzNSBmDQowMDAwMDAwOTE0IDY1NTM1IGYNCjAwMDAwMDA5 MTUgNjU1MzUgZg0KMDAwMDAwMDkxNiA2NTUzNSBmDQowMDAwMDAwOTE3IDY1NTM1IGYNCjAwMDAw MDA5MTggNjU1MzUgZg0KMDAwMDAwMDkxOSA2NTUzNSBmDQowMDAwMDAwOTIwIDY1NTM1IGYNCjAw MDAwMDA5MjEgNjU1MzUgZg0KMDAwMDAwMDkyMiA2NTUzNSBmDQowMDAwMDAwOTIzIDY1NTM1IGYN CjAwMDAwMDA5MjQgNjU1MzUgZg0KMDAwMDAwMDkyNSA2NTUzNSBmDQowMDAwMDAwOTI2IDY1NTM1 IGYNCjAwMDAwMDA5MjcgNjU1MzUgZg0KMDAwMDAwMDkyOCA2NTUzNSBmDQowMDAwMDAwOTI5IDY1 NTM1IGYNCjAwMDAwMDA5MzAgNjU1MzUgZg0KMDAwMDAwMDkzMSA2NTUzNSBmDQowMDAwMDAwOTMy IDY1NTM1IGYNCjAwMDAwMDA5MzMgNjU1MzUgZg0KMDAwMDAwMDkzNCA2NTUzNSBmDQowMDAwMDAw OTM1IDY1NTM1IGYNCjAwMDAwMDA5MzYgNjU1MzUgZg0KMDAwMDAwMDkzNyA2NTUzNSBmDQowMDAw MDAwOTM4IDY1NTM1IGYNCjAwMDAwMDA5MzkgNjU1MzUgZg0KMDAwMDAwMDk0MCA2NTUzNSBmDQow MDAwMDAwOTQxIDY1NTM1IGYNCjAwMDAwMDA5NDIgNjU1MzUgZg0KMDAwMDAwMDk0MyA2NTUzNSBm DQowMDAwMDAwOTQ0IDY1NTM1IGYNCjAwMDAwMDA5NDUgNjU1MzUgZg0KMDAwMDAwMDk0NiA2NTUz 
NSBmDQowMDAwMDAwOTQ3IDY1NTM1IGYNCjAwMDAwMDA5NDggNjU1MzUgZg0KMDAwMDAwMDk0OSA2 NTUzNSBmDQowMDAwMDAwOTUwIDY1NTM1IGYNCjAwMDAwMDA5NTEgNjU1MzUgZg0KMDAwMDAwMDk1 MiA2NTUzNSBmDQowMDAwMDAwOTUzIDY1NTM1IGYNCjAwMDAwMDA5NTQgNjU1MzUgZg0KMDAwMDAw MDk1NSA2NTUzNSBmDQowMDAwMDAwOTU2IDY1NTM1IGYNCjAwMDAwMDA5NTcgNjU1MzUgZg0KMDAw MDAwMDk1OCA2NTUzNSBmDQowMDAwMDAwOTU5IDY1NTM1IGYNCjAwMDAwMDA5NjAgNjU1MzUgZg0K MDAwMDAwMDk2MSA2NTUzNSBmDQowMDAwMDAwOTYyIDY1NTM1IGYNCjAwMDAwMDA5NjMgNjU1MzUg Zg0KMDAwMDAwMDk2NCA2NTUzNSBmDQowMDAwMDAwOTY1IDY1NTM1IGYNCjAwMDAwMDA5NjYgNjU1 MzUgZg0KMDAwMDAwMDk2NyA2NTUzNSBmDQowMDAwMDAwOTY4IDY1NTM1IGYNCjAwMDAwMDA5Njkg NjU1MzUgZg0KMDAwMDAwMDk3MCA2NTUzNSBmDQowMDAwMDAwOTcxIDY1NTM1IGYNCjAwMDAwMDA5 NzIgNjU1MzUgZg0KMDAwMDAwMDk3MyA2NTUzNSBmDQowMDAwMDAwOTc0IDY1NTM1IGYNCjAwMDAw MDA5NzUgNjU1MzUgZg0KMDAwMDAwMDk3NiA2NTUzNSBmDQowMDAwMDAwOTc3IDY1NTM1IGYNCjAw MDAwMDA5NzggNjU1MzUgZg0KMDAwMDAwMDk3OSA2NTUzNSBmDQowMDAwMDAwOTgwIDY1NTM1IGYN CjAwMDAwMDA5ODEgNjU1MzUgZg0KMDAwMDAwMDk4MiA2NTUzNSBmDQowMDAwMDAwOTgzIDY1NTM1 IGYNCjAwMDAwMDA5ODQgNjU1MzUgZg0KMDAwMDAwMDk4NSA2NTUzNSBmDQowMDAwMDAwOTg2IDY1 NTM1IGYNCjAwMDAwMDA5ODcgNjU1MzUgZg0KMDAwMDAwMDk4OCA2NTUzNSBmDQowMDAwMDAwOTg5 IDY1NTM1IGYNCjAwMDAwMDA5OTAgNjU1MzUgZg0KMDAwMDAwMDk5MSA2NTUzNSBmDQowMDAwMDAw OTkyIDY1NTM1IGYNCjAwMDAwMDA5OTMgNjU1MzUgZg0KMDAwMDAwMDk5NCA2NTUzNSBmDQowMDAw MDAwOTk1IDY1NTM1IGYNCjAwMDAwMDA5OTYgNjU1MzUgZg0KMDAwMDAwMDk5NyA2NTUzNSBmDQow MDAwMDAwOTk4IDY1NTM1IGYNCjAwMDAwMDA5OTkgNjU1MzUgZg0KMDAwMDAwMTAwMCA2NTUzNSBm DQowMDAwMDAxMDAxIDY1NTM1IGYNCjAwMDAwMDEwMDIgNjU1MzUgZg0KMDAwMDAwMTAwMyA2NTUz NSBmDQowMDAwMDAxMDA0IDY1NTM1IGYNCjAwMDAwMDEwMDUgNjU1MzUgZg0KMDAwMDAwMTAwNiA2 NTUzNSBmDQowMDAwMDAxMDA3IDY1NTM1IGYNCjAwMDAwMDEwMDggNjU1MzUgZg0KMDAwMDAwMTAw OSA2NTUzNSBmDQowMDAwMDAxMDEwIDY1NTM1IGYNCjAwMDAwMDEwMTEgNjU1MzUgZg0KMDAwMDAw MTAxMiA2NTUzNSBmDQowMDAwMDAxMDEzIDY1NTM1IGYNCjAwMDAwMDEwMTQgNjU1MzUgZg0KMDAw MDAwMTAxNSA2NTUzNSBmDQowMDAwMDAxMDE2IDY1NTM1IGYNCjAwMDAwMDEwMTcgNjU1MzUgZg0K MDAwMDAwMTAxOCA2NTUzNSBmDQowMDAwMDAxMDE5IDY1NTM1IGYNCjAwMDAwMDEwMjAgNjU1MzUg Zg0KMDAwMDAwMTAyMSA2NTUzNSBmDQowMDAwMDAxMDIyIDY1NTM1IGYNCjAwMDAwMDEwMjMgNjU1 MzUgZg0KMDAwMDAwMTAyNCA2NTUzNSBmDQowMDAwMDAxMDI1IDY1NTM1IGYNCjAwMDAwMDEwMjYg NjU1MzUgZg0KMDAwMDAwMTAyNyA2NTUzNSBmDQowMDAwMDAxMDI4IDY1NTM1IGYNCjAwMDAwMDEw MjkgNjU1MzUgZg0KMDAwMDAwMTAzMCA2NTUzNSBmDQowMDAwMDAxMDMxIDY1NTM1IGYNCjAwMDAw MDEwMzIgNjU1MzUgZg0KMDAwMDAwMTAzMyA2NTUzNSBmDQowMDAwMDAxMDM0IDY1NTM1IGYNCjAw MDAwMDEwMzUgNjU1MzUgZg0KMDAwMDAwMTAzNiA2NTUzNSBmDQowMDAwMDAxMDM3IDY1NTM1IGYN CjAwMDAwMDEwMzggNjU1MzUgZg0KMDAwMDAwMTAzOSA2NTUzNSBmDQowMDAwMDAxMDQwIDY1NTM1 IGYNCjAwMDAwMDEwNDEgNjU1MzUgZg0KMDAwMDAwMTA0MiA2NTUzNSBmDQowMDAwMDAxMDQzIDY1 NTM1IGYNCjAwMDAwMDEwNDQgNjU1MzUgZg0KMDAwMDAwMTA0NSA2NTUzNSBmDQowMDAwMDAxMDQ2 IDY1NTM1IGYNCjAwMDAwMDEwNDcgNjU1MzUgZg0KMDAwMDAwMTA0OCA2NTUzNSBmDQowMDAwMDAx MDQ5IDY1NTM1IGYNCjAwMDAwMDEwNTAgNjU1MzUgZg0KMDAwMDAwMTA1MSA2NTUzNSBmDQowMDAw MDAxMDUyIDY1NTM1IGYNCjAwMDAwMDEwNTMgNjU1MzUgZg0KMDAwMDAwMTA1NCA2NTUzNSBmDQow MDAwMDAxMDU1IDY1NTM1IGYNCjAwMDAwMDEwNTYgNjU1MzUgZg0KMDAwMDAwMTA1NyA2NTUzNSBm DQowMDAwMDAxMDU4IDY1NTM1IGYNCjAwMDAwMDEwNTkgNjU1MzUgZg0KMDAwMDAwMTA2MCA2NTUz NSBmDQowMDAwMDAxMDYxIDY1NTM1IGYNCjAwMDAwMDEwNjIgNjU1MzUgZg0KMDAwMDAwMTA2MyA2 NTUzNSBmDQowMDAwMDAxMDY0IDY1NTM1IGYNCjAwMDAwMDEwNjUgNjU1MzUgZg0KMDAwMDAwMTA2 NiA2NTUzNSBmDQowMDAwMDAxMDY3IDY1NTM1IGYNCjAwMDAwMDEwNjggNjU1MzUgZg0KMDAwMDAw MTA2OSA2NTUzNSBmDQowMDAwMDAxMDcwIDY1NTM1IGYNCjAwMDAwMDEwNzEgNjU1MzUgZg0KMDAw MDAwMTA3MiA2NTUzNSBmDQowMDAwMDAxMDczIDY1NTM1IGYNCjAwMDAwMDEwNzQgNjU1MzUgZg0K MDAwMDAwMTA3NSA2NTUzNSBmDQowMDAwMDAxMDc2IDY1NTM1IGYNCjAwMDAwMDEwNzcgNjU1MzUg 
Zg0KMDAwMDAwMTA3OCA2NTUzNSBmDQowMDAwMDAxMDc5IDY1NTM1IGYNCjAwMDAwMDEwODAgNjU1 MzUgZg0KMDAwMDAwMTA4MSA2NTUzNSBmDQowMDAwMDAxMDgyIDY1NTM1IGYNCjAwMDAwMDEwODMg NjU1MzUgZg0KMDAwMDAwMTA4NCA2NTUzNSBmDQowMDAwMDAxMDg1IDY1NTM1IGYNCjAwMDAwMDEw ODYgNjU1MzUgZg0KMDAwMDAwMTA4NyA2NTUzNSBmDQowMDAwMDAxMDg4IDY1NTM1IGYNCjAwMDAw MDEwODkgNjU1MzUgZg0KMDAwMDAwMTA5MCA2NTUzNSBmDQowMDAwMDAxMDkxIDY1NTM1IGYNCjAw MDAwMDEwOTIgNjU1MzUgZg0KMDAwMDAwMTA5MyA2NTUzNSBmDQowMDAwMDAxMDk0IDY1NTM1IGYN CjAwMDAwMDEwOTUgNjU1MzUgZg0KMDAwMDAwMTA5NiA2NTUzNSBmDQowMDAwMDAxMDk3IDY1NTM1 IGYNCjAwMDAwMDEwOTggNjU1MzUgZg0KMDAwMDAwMTA5OSA2NTUzNSBmDQowMDAwMDAxMTAwIDY1 NTM1IGYNCjAwMDAwMDExMDEgNjU1MzUgZg0KMDAwMDAwMTEwMiA2NTUzNSBmDQowMDAwMDAxMTAz IDY1NTM1IGYNCjAwMDAwMDExMDQgNjU1MzUgZg0KMDAwMDAwMTEwNSA2NTUzNSBmDQowMDAwMDAx MTA2IDY1NTM1IGYNCjAwMDAwMDExMDcgNjU1MzUgZg0KMDAwMDAwMTEwOCA2NTUzNSBmDQowMDAw MDAxMTA5IDY1NTM1IGYNCjAwMDAwMDExMTAgNjU1MzUgZg0KMDAwMDAwMTExMSA2NTUzNSBmDQow MDAwMDAxMTEyIDY1NTM1IGYNCjAwMDAwMDExMTMgNjU1MzUgZg0KMDAwMDAwMTExNCA2NTUzNSBm DQowMDAwMDAxMTE1IDY1NTM1IGYNCjAwMDAwMDExMTYgNjU1MzUgZg0KMDAwMDAwMTExNyA2NTUz NSBmDQowMDAwMDAxMTE4IDY1NTM1IGYNCjAwMDAwMDExMTkgNjU1MzUgZg0KMDAwMDAwMTEyMCA2 NTUzNSBmDQowMDAwMDAxMTIxIDY1NTM1IGYNCjAwMDAwMDExMjIgNjU1MzUgZg0KMDAwMDAwMTEy MyA2NTUzNSBmDQowMDAwMDAxMTI0IDY1NTM1IGYNCjAwMDAwMDExMjUgNjU1MzUgZg0KMDAwMDAw MTEyNiA2NTUzNSBmDQowMDAwMDAxMTI3IDY1NTM1IGYNCjAwMDAwMDExMjggNjU1MzUgZg0KMDAw MDAwMTEyOSA2NTUzNSBmDQowMDAwMDAxMTMwIDY1NTM1IGYNCjAwMDAwMDExMzEgNjU1MzUgZg0K MDAwMDAwMTEzMiA2NTUzNSBmDQowMDAwMDAxMTMzIDY1NTM1IGYNCjAwMDAwMDExMzQgNjU1MzUg Zg0KMDAwMDAwMTEzNSA2NTUzNSBmDQowMDAwMDAxMTM2IDY1NTM1IGYNCjAwMDAwMDExMzcgNjU1 MzUgZg0KMDAwMDAwMTEzOCA2NTUzNSBmDQowMDAwMDAxMTM5IDY1NTM1IGYNCjAwMDAwMDExNDAg NjU1MzUgZg0KMDAwMDAwMTE0MSA2NTUzNSBmDQowMDAwMDAxMTQyIDY1NTM1IGYNCjAwMDAwMDEx NDMgNjU1MzUgZg0KMDAwMDAwMTE0NCA2NTUzNSBmDQowMDAwMDAxMTQ1IDY1NTM1IGYNCjAwMDAw MDExNDYgNjU1MzUgZg0KMDAwMDAwMTE0NyA2NTUzNSBmDQowMDAwMDAxMTQ4IDY1NTM1IGYNCjAw MDAwMDExNDkgNjU1MzUgZg0KMDAwMDAwMTE1MCA2NTUzNSBmDQowMDAwMDAxMTUxIDY1NTM1IGYN CjAwMDAwMDExNTIgNjU1MzUgZg0KMDAwMDAwMTE1MyA2NTUzNSBmDQowMDAwMDAxMTU0IDY1NTM1 IGYNCjAwMDAwMDExNTUgNjU1MzUgZg0KMDAwMDAwMTE1NiA2NTUzNSBmDQowMDAwMDAxMTU3IDY1 NTM1IGYNCjAwMDAwMDExNTggNjU1MzUgZg0KMDAwMDAwMTE1OSA2NTUzNSBmDQowMDAwMDAxMTYw IDY1NTM1IGYNCjAwMDAwMDExNjEgNjU1MzUgZg0KMDAwMDAwMTE2MiA2NTUzNSBmDQowMDAwMDAx MTYzIDY1NTM1IGYNCjAwMDAwMDExNjQgNjU1MzUgZg0KMDAwMDAwMTE2NSA2NTUzNSBmDQowMDAw MDAxMTY2IDY1NTM1IGYNCjAwMDAwMDExNjcgNjU1MzUgZg0KMDAwMDAwMTE2OCA2NTUzNSBmDQow MDAwMDAxMTY5IDY1NTM1IGYNCjAwMDAwMDExNzAgNjU1MzUgZg0KMDAwMDAwMTE3MSA2NTUzNSBm DQowMDAwMDAxMTcyIDY1NTM1IGYNCjAwMDAwMDExNzMgNjU1MzUgZg0KMDAwMDAwMTE3NCA2NTUz NSBmDQowMDAwMDAxMTc1IDY1NTM1IGYNCjAwMDAwMDExNzYgNjU1MzUgZg0KMDAwMDAwMTE3NyA2 NTUzNSBmDQowMDAwMDAxMTc4IDY1NTM1IGYNCjAwMDAwMDExNzkgNjU1MzUgZg0KMDAwMDAwMTE4 MCA2NTUzNSBmDQowMDAwMDAxMTgxIDY1NTM1IGYNCjAwMDAwMDExODIgNjU1MzUgZg0KMDAwMDAw MTE4MyA2NTUzNSBmDQowMDAwMDAxMTg0IDY1NTM1IGYNCjAwMDAwMDExODUgNjU1MzUgZg0KMDAw MDAwMTE4NiA2NTUzNSBmDQowMDAwMDAxMTg3IDY1NTM1IGYNCjAwMDAwMDExODggNjU1MzUgZg0K MDAwMDAwMTE4OSA2NTUzNSBmDQowMDAwMDAxMTkwIDY1NTM1IGYNCjAwMDAwMDExOTEgNjU1MzUg Zg0KMDAwMDAwMTE5MiA2NTUzNSBmDQowMDAwMDAxMTkzIDY1NTM1IGYNCjAwMDAwMDExOTQgNjU1 MzUgZg0KMDAwMDAwMTE5NSA2NTUzNSBmDQowMDAwMDAxMTk2IDY1NTM1IGYNCjAwMDAwMDExOTcg NjU1MzUgZg0KMDAwMDAwMTE5OCA2NTUzNSBmDQowMDAwMDAxMTk5IDY1NTM1IGYNCjAwMDAwMDEy MDAgNjU1MzUgZg0KMDAwMDAwMTIwMSA2NTUzNSBmDQowMDAwMDAxMjAyIDY1NTM1IGYNCjAwMDAw MDEyMDMgNjU1MzUgZg0KMDAwMDAwMTIwNCA2NTUzNSBmDQowMDAwMDAxMjA1IDY1NTM1IGYNCjAw MDAwMDEyMDYgNjU1MzUgZg0KMDAwMDAwMTIwNyA2NTUzNSBmDQowMDAwMDAxMjA4IDY1NTM1IGYN 
CjAwMDAwMDEyMDkgNjU1MzUgZg0KMDAwMDAwMTIxMCA2NTUzNSBmDQowMDAwMDAxMjExIDY1NTM1 IGYNCjAwMDAwMDEyMTIgNjU1MzUgZg0KMDAwMDAwMTIxMyA2NTUzNSBmDQowMDAwMDAxMjE0IDY1 NTM1IGYNCjAwMDAwMDEyMTUgNjU1MzUgZg0KMDAwMDAwMTIxNiA2NTUzNSBmDQowMDAwMDAxMjE3 IDY1NTM1IGYNCjAwMDAwMDEyMTggNjU1MzUgZg0KMDAwMDAwMTIxOSA2NTUzNSBmDQowMDAwMDAx MjIwIDY1NTM1IGYNCjAwMDAwMDEyMjEgNjU1MzUgZg0KMDAwMDAwMTIyMiA2NTUzNSBmDQowMDAw MDAxMjIzIDY1NTM1IGYNCjAwMDAwMDEyMjQgNjU1MzUgZg0KMDAwMDAwMTIyNSA2NTUzNSBmDQow MDAwMDAxMjI2IDY1NTM1IGYNCjAwMDAwMDEyMjcgNjU1MzUgZg0KMDAwMDAwMTIyOCA2NTUzNSBm DQowMDAwMDAxMjI5IDY1NTM1IGYNCjAwMDAwMDEyMzAgNjU1MzUgZg0KMDAwMDAwMTIzMSA2NTUz NSBmDQowMDAwMDAxMjMyIDY1NTM1IGYNCjAwMDAwMDEyMzMgNjU1MzUgZg0KMDAwMDAwMTIzNCA2 NTUzNSBmDQowMDAwMDAxMjM1IDY1NTM1IGYNCjAwMDAwMDEyMzYgNjU1MzUgZg0KMDAwMDAwMTIz NyA2NTUzNSBmDQowMDAwMDAxMjM4IDY1NTM1IGYNCjAwMDAwMDEyMzkgNjU1MzUgZg0KMDAwMDAw MTI0MCA2NTUzNSBmDQowMDAwMDAxMjQxIDY1NTM1IGYNCjAwMDAwMDEyNDIgNjU1MzUgZg0KMDAw MDAwMTI0MyA2NTUzNSBmDQowMDAwMDAxMjQ0IDY1NTM1IGYNCjAwMDAwMDEyNDUgNjU1MzUgZg0K MDAwMDAwMTI0NiA2NTUzNSBmDQowMDAwMDAxMjQ3IDY1NTM1IGYNCjAwMDAwMDEyNDggNjU1MzUg Zg0KMDAwMDAwMTI0OSA2NTUzNSBmDQowMDAwMDAxMjUwIDY1NTM1IGYNCjAwMDAwMDEyNTEgNjU1 MzUgZg0KMDAwMDAwMTI1MiA2NTUzNSBmDQowMDAwMDAxMjUzIDY1NTM1IGYNCjAwMDAwMDEyNTQg NjU1MzUgZg0KMDAwMDAwMTI1NSA2NTUzNSBmDQowMDAwMDAxMjU2IDY1NTM1IGYNCjAwMDAwMDEy NTcgNjU1MzUgZg0KMDAwMDAwMTI1OCA2NTUzNSBmDQowMDAwMDAxMjU5IDY1NTM1IGYNCjAwMDAw MDEyNjAgNjU1MzUgZg0KMDAwMDAwMTI2MSA2NTUzNSBmDQowMDAwMDAxMjYyIDY1NTM1IGYNCjAw MDAwMDEyNjMgNjU1MzUgZg0KMDAwMDAwMTI2NCA2NTUzNSBmDQowMDAwMDAxMjY1IDY1NTM1IGYN CjAwMDAwMDEyNjYgNjU1MzUgZg0KMDAwMDAwMTI2NyA2NTUzNSBmDQowMDAwMDAxMjY4IDY1NTM1 IGYNCjAwMDAwMDEyNjkgNjU1MzUgZg0KMDAwMDAwMTI3MCA2NTUzNSBmDQowMDAwMDAxMjcxIDY1 NTM1IGYNCjAwMDAwMDEyNzIgNjU1MzUgZg0KMDAwMDAwMTI3MyA2NTUzNSBmDQowMDAwMDAxMjc0 IDY1NTM1IGYNCjAwMDAwMDEyNzUgNjU1MzUgZg0KMDAwMDAwMTI3NiA2NTUzNSBmDQowMDAwMDAx Mjc3IDY1NTM1IGYNCjAwMDAwMDEyNzggNjU1MzUgZg0KMDAwMDAwMTI3OSA2NTUzNSBmDQowMDAw MDAxMjgwIDY1NTM1IGYNCjAwMDAwMDEyODEgNjU1MzUgZg0KMDAwMDAwMTI4MiA2NTUzNSBmDQow MDAwMDAxMjgzIDY1NTM1IGYNCjAwMDAwMDEyODQgNjU1MzUgZg0KMDAwMDAwMTI4NSA2NTUzNSBm DQowMDAwMDAxMjg2IDY1NTM1IGYNCjAwMDAwMDEyODcgNjU1MzUgZg0KMDAwMDAwMTI4OCA2NTUz NSBmDQowMDAwMDAxMjg5IDY1NTM1IGYNCjAwMDAwMDEyOTAgNjU1MzUgZg0KMDAwMDAwMTI5MSA2 NTUzNSBmDQowMDAwMDAxMjkyIDY1NTM1IGYNCjAwMDAwMDEyOTMgNjU1MzUgZg0KMDAwMDAwMTI5 NCA2NTUzNSBmDQowMDAwMDAxMjk1IDY1NTM1IGYNCjAwMDAwMDEyOTYgNjU1MzUgZg0KMDAwMDAw MTI5NyA2NTUzNSBmDQowMDAwMDAxMjk4IDY1NTM1IGYNCjAwMDAwMDEyOTkgNjU1MzUgZg0KMDAw MDAwMTMwMCA2NTUzNSBmDQowMDAwMDAxMzAxIDY1NTM1IGYNCjAwMDAwMDEzMDIgNjU1MzUgZg0K MDAwMDAwMTMwMyA2NTUzNSBmDQowMDAwMDAxMzA0IDY1NTM1IGYNCjAwMDAwMDEzMDUgNjU1MzUg Zg0KMDAwMDAwMTMwNiA2NTUzNSBmDQowMDAwMDAxMzA3IDY1NTM1IGYNCjAwMDAwMDEzMDggNjU1 MzUgZg0KMDAwMDAwMTMwOSA2NTUzNSBmDQowMDAwMDAxMzEwIDY1NTM1IGYNCjAwMDAwMDEzMTEg NjU1MzUgZg0KMDAwMDAwMTMxMiA2NTUzNSBmDQowMDAwMDAxMzEzIDY1NTM1IGYNCjAwMDAwMDEz MTQgNjU1MzUgZg0KMDAwMDAwMTMxNSA2NTUzNSBmDQowMDAwMDAxMzE2IDY1NTM1IGYNCjAwMDAw MDEzMTcgNjU1MzUgZg0KMDAwMDAwMTMxOCA2NTUzNSBmDQowMDAwMDAxMzE5IDY1NTM1IGYNCjAw MDAwMDEzMjAgNjU1MzUgZg0KMDAwMDAwMTMyMSA2NTUzNSBmDQowMDAwMDAxMzIyIDY1NTM1IGYN CjAwMDAwMDEzMjMgNjU1MzUgZg0KMDAwMDAwMTMyNCA2NTUzNSBmDQowMDAwMDAxMzI1IDY1NTM1 IGYNCjAwMDAwMDEzMjYgNjU1MzUgZg0KMDAwMDAwMTMyNyA2NTUzNSBmDQowMDAwMDAxMzI4IDY1 NTM1IGYNCjAwMDAwMDEzMjkgNjU1MzUgZg0KMDAwMDAwMTMzMCA2NTUzNSBmDQowMDAwMDAxMzMx IDY1NTM1IGYNCjAwMDAwMDEzMzIgNjU1MzUgZg0KMDAwMDAwMTMzMyA2NTUzNSBmDQowMDAwMDAx MzM0IDY1NTM1IGYNCjAwMDAwMDEzMzUgNjU1MzUgZg0KMDAwMDAwMTMzNiA2NTUzNSBmDQowMDAw MDAxMzM3IDY1NTM1IGYNCjAwMDAwMDEzMzggNjU1MzUgZg0KMDAwMDAwMTMzOSA2NTUzNSBmDQow 
MDAwMDAxMzQwIDY1NTM1IGYNCjAwMDAwMDEzNDEgNjU1MzUgZg0KMDAwMDAwMTM0MiA2NTUzNSBm DQowMDAwMDAxMzQzIDY1NTM1IGYNCjAwMDAwMDEzNDQgNjU1MzUgZg0KMDAwMDAwMTM0NSA2NTUz NSBmDQowMDAwMDAxMzQ2IDY1NTM1IGYNCjAwMDAwMDEzNDcgNjU1MzUgZg0KMDAwMDAwMTM0OCA2 NTUzNSBmDQowMDAwMDAxMzQ5IDY1NTM1IGYNCjAwMDAwMDEzNTAgNjU1MzUgZg0KMDAwMDAwMTM1 MSA2NTUzNSBmDQowMDAwMDAxMzUyIDY1NTM1IGYNCjAwMDAwMDEzNTMgNjU1MzUgZg0KMDAwMDAw MTM1NCA2NTUzNSBmDQowMDAwMDAxMzU1IDY1NTM1IGYNCjAwMDAwMDEzNTYgNjU1MzUgZg0KMDAw MDAwMTM1NyA2NTUzNSBmDQowMDAwMDAxMzU4IDY1NTM1IGYNCjAwMDAwMDEzNTkgNjU1MzUgZg0K MDAwMDAwMTM2MCA2NTUzNSBmDQowMDAwMDAxMzYxIDY1NTM1IGYNCjAwMDAwMDEzNjIgNjU1MzUg Zg0KMDAwMDAwMTM2MyA2NTUzNSBmDQowMDAwMDAxMzY0IDY1NTM1IGYNCjAwMDAwMDEzNjUgNjU1 MzUgZg0KMDAwMDAwMTM2NiA2NTUzNSBmDQowMDAwMDAxMzY3IDY1NTM1IGYNCjAwMDAwMDEzNjgg NjU1MzUgZg0KMDAwMDAwMTM2OSA2NTUzNSBmDQowMDAwMDAxMzcwIDY1NTM1IGYNCjAwMDAwMDEz NzEgNjU1MzUgZg0KMDAwMDAwMTM3MiA2NTUzNSBmDQowMDAwMDAxMzczIDY1NTM1IGYNCjAwMDAw MDEzNzQgNjU1MzUgZg0KMDAwMDAwMTM3NSA2NTUzNSBmDQowMDAwMDAxMzc2IDY1NTM1IGYNCjAw MDAwMDEzNzcgNjU1MzUgZg0KMDAwMDAwMTM3OCA2NTUzNSBmDQowMDAwMDAxMzc5IDY1NTM1IGYN CjAwMDAwMDEzODAgNjU1MzUgZg0KMDAwMDAwMTM4MSA2NTUzNSBmDQowMDAwMDAxMzgyIDY1NTM1 IGYNCjAwMDAwMDEzODMgNjU1MzUgZg0KMDAwMDAwMTM4NCA2NTUzNSBmDQowMDAwMDAxMzg1IDY1 NTM1IGYNCjAwMDAwMDEzODYgNjU1MzUgZg0KMDAwMDAwMTM4NyA2NTUzNSBmDQowMDAwMDAxMzg4 IDY1NTM1IGYNCjAwMDAwMDEzODkgNjU1MzUgZg0KMDAwMDAwMTM5MCA2NTUzNSBmDQowMDAwMDAx MzkxIDY1NTM1IGYNCjAwMDAwMDEzOTIgNjU1MzUgZg0KMDAwMDAwMTM5MyA2NTUzNSBmDQowMDAw MDAxMzk0IDY1NTM1IGYNCjAwMDAwMDEzOTUgNjU1MzUgZg0KMDAwMDAwMTM5NiA2NTUzNSBmDQow MDAwMDAxMzk3IDY1NTM1IGYNCjAwMDAwMDEzOTggNjU1MzUgZg0KMDAwMDAwMTM5OSA2NTUzNSBm DQowMDAwMDAxNDAwIDY1NTM1IGYNCjAwMDAwMDE0MDEgNjU1MzUgZg0KMDAwMDAwMTQwMiA2NTUz NSBmDQowMDAwMDAxNDAzIDY1NTM1IGYNCjAwMDAwMDE0MDQgNjU1MzUgZg0KMDAwMDAwMTQwNSA2 NTUzNSBmDQowMDAwMDAxNDA2IDY1NTM1IGYNCjAwMDAwMDE0MDcgNjU1MzUgZg0KMDAwMDAwMTQw OCA2NTUzNSBmDQowMDAwMDAxNDA5IDY1NTM1IGYNCjAwMDAwMDE0MTAgNjU1MzUgZg0KMDAwMDAw MTQxMSA2NTUzNSBmDQowMDAwMDAxNDEyIDY1NTM1IGYNCjAwMDAwMDE0MTMgNjU1MzUgZg0KMDAw MDAwMTQxNCA2NTUzNSBmDQowMDAwMDAxNDE1IDY1NTM1IGYNCjAwMDAwMDE0MTYgNjU1MzUgZg0K MDAwMDAwMTQxNyA2NTUzNSBmDQowMDAwMDAxNDE4IDY1NTM1IGYNCjAwMDAwMDE0MTkgNjU1MzUg Zg0KMDAwMDAwMTQyMCA2NTUzNSBmDQowMDAwMDAxNDIxIDY1NTM1IGYNCjAwMDAwMDE0MjIgNjU1 MzUgZg0KMDAwMDAwMTQyMyA2NTUzNSBmDQowMDAwMDAxNDI0IDY1NTM1IGYNCjAwMDAwMDE0MjUg NjU1MzUgZg0KMDAwMDAwMTQyNiA2NTUzNSBmDQowMDAwMDAxNDI3IDY1NTM1IGYNCjAwMDAwMDE0 MjggNjU1MzUgZg0KMDAwMDAwMTQyOSA2NTUzNSBmDQowMDAwMDAxNDMwIDY1NTM1IGYNCjAwMDAw MDE0MzEgNjU1MzUgZg0KMDAwMDAwMTQzMiA2NTUzNSBmDQowMDAwMDAxNDMzIDY1NTM1IGYNCjAw MDAwMDE0MzQgNjU1MzUgZg0KMDAwMDAwMTQzNSA2NTUzNSBmDQowMDAwMDAxNDM2IDY1NTM1IGYN CjAwMDAwMDE0MzcgNjU1MzUgZg0KMDAwMDAwMTQzOCA2NTUzNSBmDQowMDAwMDAxNDM5IDY1NTM1 IGYNCjAwMDAwMDE0NDAgNjU1MzUgZg0KMDAwMDAwMTQ0MSA2NTUzNSBmDQowMDAwMDAxNDQyIDY1 NTM1IGYNCjAwMDAwMDE0NDMgNjU1MzUgZg0KMDAwMDAwMTQ0NCA2NTUzNSBmDQowMDAwMDAxNDQ1 IDY1NTM1IGYNCjAwMDAwMDE0NDYgNjU1MzUgZg0KMDAwMDAwMTQ0NyA2NTUzNSBmDQowMDAwMDAx NDQ4IDY1NTM1IGYNCjAwMDAwMDE0NDkgNjU1MzUgZg0KMDAwMDAwMTQ1MCA2NTUzNSBmDQowMDAw MDAxNDUxIDY1NTM1IGYNCjAwMDAwMDE0NTIgNjU1MzUgZg0KMDAwMDAwMTQ1MyA2NTUzNSBmDQow MDAwMDAxNDU0IDY1NTM1IGYNCjAwMDAwMDE0NTUgNjU1MzUgZg0KMDAwMDAwMTQ1NiA2NTUzNSBm DQowMDAwMDAxNDU3IDY1NTM1IGYNCjAwMDAwMDE0NTggNjU1MzUgZg0KMDAwMDAwMTQ1OSA2NTUz NSBmDQowMDAwMDAxNDYwIDY1NTM1IGYNCjAwMDAwMDE0NjEgNjU1MzUgZg0KMDAwMDAwMTQ2MiA2 NTUzNSBmDQowMDAwMDAxNDYzIDY1NTM1IGYNCjAwMDAwMDE0NjQgNjU1MzUgZg0KMDAwMDAwMTQ2 NSA2NTUzNSBmDQowMDAwMDAxNDY2IDY1NTM1IGYNCjAwMDAwMDE0NjcgNjU1MzUgZg0KMDAwMDAw MTQ2OCA2NTUzNSBmDQowMDAwMDAxNDY5IDY1NTM1IGYNCjAwMDAwMDE0NzAgNjU1MzUgZg0KMDAw 
MDAwMTQ3MSA2NTUzNSBmDQowMDAwMDAxNDcyIDY1NTM1IGYNCjAwMDAwMDE0NzMgNjU1MzUgZg0K MDAwMDAwMTQ3NCA2NTUzNSBmDQowMDAwMDAxNDc1IDY1NTM1IGYNCjAwMDAwMDE0NzYgNjU1MzUg Zg0KMDAwMDAwMTQ3NyA2NTUzNSBmDQowMDAwMDAxNDc4IDY1NTM1IGYNCjAwMDAwMDE0NzkgNjU1 MzUgZg0KMDAwMDAwMTQ4MCA2NTUzNSBmDQowMDAwMDAxNDgxIDY1NTM1IGYNCjAwMDAwMDE0ODIg NjU1MzUgZg0KMDAwMDAwMTQ4MyA2NTUzNSBmDQowMDAwMDAxNDg0IDY1NTM1IGYNCjAwMDAwMDE0 ODUgNjU1MzUgZg0KMDAwMDAwMTQ4NiA2NTUzNSBmDQowMDAwMDAxNDg3IDY1NTM1IGYNCjAwMDAw MDE0ODggNjU1MzUgZg0KMDAwMDAwMTQ4OSA2NTUzNSBmDQowMDAwMDAxNDkwIDY1NTM1IGYNCjAw MDAwMDE0OTEgNjU1MzUgZg0KMDAwMDAwMTQ5MiA2NTUzNSBmDQowMDAwMDAxNDkzIDY1NTM1IGYN CjAwMDAwMDE0OTQgNjU1MzUgZg0KMDAwMDAwMTQ5NSA2NTUzNSBmDQowMDAwMDAxNDk2IDY1NTM1 IGYNCjAwMDAwMDE0OTcgNjU1MzUgZg0KMDAwMDAwMTQ5OCA2NTUzNSBmDQowMDAwMDAxNDk5IDY1 NTM1IGYNCjAwMDAwMDE1MDAgNjU1MzUgZg0KMDAwMDAwMTUwMSA2NTUzNSBmDQowMDAwMDAxNTAy IDY1NTM1IGYNCjAwMDAwMDE1MDMgNjU1MzUgZg0KMDAwMDAwMTUwNCA2NTUzNSBmDQowMDAwMDAx NTA1IDY1NTM1IGYNCjAwMDAwMDE1MDYgNjU1MzUgZg0KMDAwMDAwMTUwNyA2NTUzNSBmDQowMDAw MDAxNTA4IDY1NTM1IGYNCjAwMDAwMDE1MDkgNjU1MzUgZg0KMDAwMDAwMTUxMCA2NTUzNSBmDQow MDAwMDAxNTExIDY1NTM1IGYNCjAwMDAwMDE1MTIgNjU1MzUgZg0KMDAwMDAwMTUxMyA2NTUzNSBm DQowMDAwMDAxNTE0IDY1NTM1IGYNCjAwMDAwMDE1MTUgNjU1MzUgZg0KMDAwMDAwMTUxNiA2NTUz NSBmDQowMDAwMDAxNTE3IDY1NTM1IGYNCjAwMDAwMDE1MTggNjU1MzUgZg0KMDAwMDAwMTUxOSA2 NTUzNSBmDQowMDAwMDAxNTIwIDY1NTM1IGYNCjAwMDAwMDE1MjEgNjU1MzUgZg0KMDAwMDAwMTUy MiA2NTUzNSBmDQowMDAwMDAxNTIzIDY1NTM1IGYNCjAwMDAwMDE1MjQgNjU1MzUgZg0KMDAwMDAw MTUyNSA2NTUzNSBmDQowMDAwMDAxNTI2IDY1NTM1IGYNCjAwMDAwMDE1MjcgNjU1MzUgZg0KMDAw MDAwMTUyOCA2NTUzNSBmDQowMDAwMDAxNTI5IDY1NTM1IGYNCjAwMDAwMDE1MzAgNjU1MzUgZg0K MDAwMDAwMTUzMSA2NTUzNSBmDQowMDAwMDAxNTMyIDY1NTM1IGYNCjAwMDAwMDE1MzMgNjU1MzUg Zg0KMDAwMDAwMTUzNCA2NTUzNSBmDQowMDAwMDAxNTM1IDY1NTM1IGYNCjAwMDAwMDE1MzYgNjU1 MzUgZg0KMDAwMDAwMTUzNyA2NTUzNSBmDQowMDAwMDAxNTM4IDY1NTM1IGYNCjAwMDAwMDE1Mzkg NjU1MzUgZg0KMDAwMDAwMTU0MCA2NTUzNSBmDQowMDAwMDAxNTQxIDY1NTM1IGYNCjAwMDAwMDE1 NDIgNjU1MzUgZg0KMDAwMDAwMTU0MyA2NTUzNSBmDQowMDAwMDAxNTQ0IDY1NTM1IGYNCjAwMDAw MDE1NDUgNjU1MzUgZg0KMDAwMDAwMTU0NiA2NTUzNSBmDQowMDAwMDAxNTQ3IDY1NTM1IGYNCjAw MDAwMDE1NDggNjU1MzUgZg0KMDAwMDAwMTU0OSA2NTUzNSBmDQowMDAwMDAxNTUwIDY1NTM1IGYN CjAwMDAwMDE1NTEgNjU1MzUgZg0KMDAwMDAwMTU1MiA2NTUzNSBmDQowMDAwMDAxNTUzIDY1NTM1 IGYNCjAwMDAwMDE1NTQgNjU1MzUgZg0KMDAwMDAwMTU1NSA2NTUzNSBmDQowMDAwMDAxNTU2IDY1 NTM1IGYNCjAwMDAwMDE1NTcgNjU1MzUgZg0KMDAwMDAwMTU1OCA2NTUzNSBmDQowMDAwMDAxNTU5 IDY1NTM1IGYNCjAwMDAwMDE1NjAgNjU1MzUgZg0KMDAwMDAwMTU2MSA2NTUzNSBmDQowMDAwMDAx NTYyIDY1NTM1IGYNCjAwMDAwMDE1NjMgNjU1MzUgZg0KMDAwMDAwMTU2NCA2NTUzNSBmDQowMDAw MDAxNTY1IDY1NTM1IGYNCjAwMDAwMDE1NjYgNjU1MzUgZg0KMDAwMDAwMTU2NyA2NTUzNSBmDQow MDAwMDAxNTY4IDY1NTM1IGYNCjAwMDAwMDE1NjkgNjU1MzUgZg0KMDAwMDAwMTU3MCA2NTUzNSBm DQowMDAwMDAxNTcxIDY1NTM1IGYNCjAwMDAwMDE1NzIgNjU1MzUgZg0KMDAwMDAwMTU3MyA2NTUz NSBmDQowMDAwMDAxNTc0IDY1NTM1IGYNCjAwMDAwMDE1NzUgNjU1MzUgZg0KMDAwMDAwMTU3NiA2 NTUzNSBmDQowMDAwMDAxNTc3IDY1NTM1IGYNCjAwMDAwMDE1NzggNjU1MzUgZg0KMDAwMDAwMTU3 OSA2NTUzNSBmDQowMDAwMDAxNTgwIDY1NTM1IGYNCjAwMDAwMDE1ODEgNjU1MzUgZg0KMDAwMDAw MTU4MiA2NTUzNSBmDQowMDAwMDAxNTgzIDY1NTM1IGYNCjAwMDAwMDE1ODQgNjU1MzUgZg0KMDAw MDAwMTU4NSA2NTUzNSBmDQowMDAwMDAxNTg2IDY1NTM1IGYNCjAwMDAwMDE1ODcgNjU1MzUgZg0K MDAwMDAwMTU4OCA2NTUzNSBmDQowMDAwMDAxNTg5IDY1NTM1IGYNCjAwMDAwMDE1OTAgNjU1MzUg Zg0KMDAwMDAwMTU5MSA2NTUzNSBmDQowMDAwMDAxNTkyIDY1NTM1IGYNCjAwMDAwMDE1OTMgNjU1 MzUgZg0KMDAwMDAwMTU5NCA2NTUzNSBmDQowMDAwMDAxNTk1IDY1NTM1IGYNCjAwMDAwMDE1OTYg NjU1MzUgZg0KMDAwMDAwMTU5NyA2NTUzNSBmDQowMDAwMDAxNTk4IDY1NTM1IGYNCjAwMDAwMDE1 OTkgNjU1MzUgZg0KMDAwMDAwMTYwMCA2NTUzNSBmDQowMDAwMDAxNjAxIDY1NTM1IGYNCjAwMDAw 
MDE2MDIgNjU1MzUgZg0KMDAwMDAwMTYwMyA2NTUzNSBmDQowMDAwMDAxNjA0IDY1NTM1IGYNCjAw MDAwMDE2MDUgNjU1MzUgZg0KMDAwMDAwMTYwNiA2NTUzNSBmDQowMDAwMDAxNjA3IDY1NTM1IGYN CjAwMDAwMDE2MDggNjU1MzUgZg0KMDAwMDAwMTYwOSA2NTUzNSBmDQowMDAwMDAxNjEwIDY1NTM1 IGYNCjAwMDAwMDE2MTEgNjU1MzUgZg0KMDAwMDAwMTYxMiA2NTUzNSBmDQowMDAwMDAxNjEzIDY1 NTM1IGYNCjAwMDAwMDE2MTQgNjU1MzUgZg0KMDAwMDAwMTYxNSA2NTUzNSBmDQowMDAwMDAxNjE2 IDY1NTM1IGYNCjAwMDAwMDE2MTcgNjU1MzUgZg0KMDAwMDAwMTYxOCA2NTUzNSBmDQowMDAwMDAx NjE5IDY1NTM1IGYNCjAwMDAwMDE2MjAgNjU1MzUgZg0KMDAwMDAwMTYyMSA2NTUzNSBmDQowMDAw MDAxNjIyIDY1NTM1IGYNCjAwMDAwMDE2MjMgNjU1MzUgZg0KMDAwMDAwMTYyNCA2NTUzNSBmDQow MDAwMDAxNjI1IDY1NTM1IGYNCjAwMDAwMDE2MjYgNjU1MzUgZg0KMDAwMDAwMTYyNyA2NTUzNSBm DQowMDAwMDAxNjI4IDY1NTM1IGYNCjAwMDAwMDE2MjkgNjU1MzUgZg0KMDAwMDAwMTYzMCA2NTUz NSBmDQowMDAwMDAxNjMxIDY1NTM1IGYNCjAwMDAwMDE2MzIgNjU1MzUgZg0KMDAwMDAwMTYzMyA2 NTUzNSBmDQowMDAwMDAxNjM0IDY1NTM1IGYNCjAwMDAwMDE2MzUgNjU1MzUgZg0KMDAwMDAwMTYz NiA2NTUzNSBmDQowMDAwMDAxNjM3IDY1NTM1IGYNCjAwMDAwMDE2MzggNjU1MzUgZg0KMDAwMDAw MTYzOSA2NTUzNSBmDQowMDAwMDAxNjQwIDY1NTM1IGYNCjAwMDAwMDE2NDEgNjU1MzUgZg0KMDAw MDAwMTY0MiA2NTUzNSBmDQowMDAwMDAxNjQzIDY1NTM1IGYNCjAwMDAwMDE2NDQgNjU1MzUgZg0K MDAwMDAwMTY0NSA2NTUzNSBmDQowMDAwMDAxNjQ2IDY1NTM1IGYNCjAwMDAwMDE2NDcgNjU1MzUg Zg0KMDAwMDAwMTY0OCA2NTUzNSBmDQowMDAwMDAxNjQ5IDY1NTM1IGYNCjAwMDAwMDE2NTAgNjU1 MzUgZg0KMDAwMDAwMTY1MSA2NTUzNSBmDQowMDAwMDAxNjUyIDY1NTM1IGYNCjAwMDAwMDE2NTMg NjU1MzUgZg0KMDAwMDAwMTY1NCA2NTUzNSBmDQowMDAwMDAxNjU1IDY1NTM1IGYNCjAwMDAwMDE2 NTYgNjU1MzUgZg0KMDAwMDAwMTY1NyA2NTUzNSBmDQowMDAwMDAxNjU4IDY1NTM1IGYNCjAwMDAw MDE2NTkgNjU1MzUgZg0KMDAwMDAwMTY2MCA2NTUzNSBmDQowMDAwMDAxNjYxIDY1NTM1IGYNCjAw MDAwMDE2NjIgNjU1MzUgZg0KMDAwMDAwMTY2MyA2NTUzNSBmDQowMDAwMDAxNjY0IDY1NTM1IGYN CjAwMDAwMDE2NjUgNjU1MzUgZg0KMDAwMDAwMTY2NiA2NTUzNSBmDQowMDAwMDAxNjY3IDY1NTM1 IGYNCjAwMDAwMDE2NjggNjU1MzUgZg0KMDAwMDAwMTY2OSA2NTUzNSBmDQowMDAwMDAxNjcwIDY1 NTM1IGYNCjAwMDAwMDE2NzEgNjU1MzUgZg0KMDAwMDAwMTY3MiA2NTUzNSBmDQowMDAwMDAxNjcz IDY1NTM1IGYNCjAwMDAwMDE2NzQgNjU1MzUgZg0KMDAwMDAwMTY3NSA2NTUzNSBmDQowMDAwMDAx Njc2IDY1NTM1IGYNCjAwMDAwMDE2NzcgNjU1MzUgZg0KMDAwMDAwMTY3OCA2NTUzNSBmDQowMDAw MDAxNjc5IDY1NTM1IGYNCjAwMDAwMDE2ODAgNjU1MzUgZg0KMDAwMDAwMTY4MSA2NTUzNSBmDQow MDAwMDAxNjgyIDY1NTM1IGYNCjAwMDAwMDE2ODMgNjU1MzUgZg0KMDAwMDAwMTY4NCA2NTUzNSBm DQowMDAwMDAxNjg1IDY1NTM1IGYNCjAwMDAwMDE2ODYgNjU1MzUgZg0KMDAwMDAwMTY4NyA2NTUz NSBmDQowMDAwMDAxNjg4IDY1NTM1IGYNCjAwMDAwMDE2ODkgNjU1MzUgZg0KMDAwMDAwMTY5MCA2 NTUzNSBmDQowMDAwMDAxNjkxIDY1NTM1IGYNCjAwMDAwMDE2OTIgNjU1MzUgZg0KMDAwMDAwMTY5 MyA2NTUzNSBmDQowMDAwMDAxNjk0IDY1NTM1IGYNCjAwMDAwMDE2OTUgNjU1MzUgZg0KMDAwMDAw MTY5NiA2NTUzNSBmDQowMDAwMDAxNjk3IDY1NTM1IGYNCjAwMDAwMDE2OTggNjU1MzUgZg0KMDAw MDAwMTY5OSA2NTUzNSBmDQowMDAwMDAxNzAwIDY1NTM1IGYNCjAwMDAwMDE3MDEgNjU1MzUgZg0K MDAwMDAwMTcwMiA2NTUzNSBmDQowMDAwMDAxNzAzIDY1NTM1IGYNCjAwMDAwMDE3MDQgNjU1MzUg Zg0KMDAwMDAwMTcwNSA2NTUzNSBmDQowMDAwMDAxNzA2IDY1NTM1IGYNCjAwMDAwMDE3MDcgNjU1 MzUgZg0KMDAwMDAwMTcwOCA2NTUzNSBmDQowMDAwMDAxNzA5IDY1NTM1IGYNCjAwMDAwMDE3MTAg NjU1MzUgZg0KMDAwMDAwMTcxMSA2NTUzNSBmDQowMDAwMDAxNzEyIDY1NTM1IGYNCjAwMDAwMDE3 MTMgNjU1MzUgZg0KMDAwMDAwMTcxNCA2NTUzNSBmDQowMDAwMDAxNzE1IDY1NTM1IGYNCjAwMDAw MDE3MTYgNjU1MzUgZg0KMDAwMDAwMTcxNyA2NTUzNSBmDQowMDAwMDAxNzE4IDY1NTM1IGYNCjAw MDAwMDE3MTkgNjU1MzUgZg0KMDAwMDAwMTcyMCA2NTUzNSBmDQowMDAwMDAxNzIxIDY1NTM1IGYN CjAwMDAwMDE3MjIgNjU1MzUgZg0KMDAwMDAwMTcyMyA2NTUzNSBmDQowMDAwMDAxNzI0IDY1NTM1 IGYNCjAwMDAwMDE3MjUgNjU1MzUgZg0KMDAwMDAwMTcyNiA2NTUzNSBmDQowMDAwMDAxNzI3IDY1 NTM1IGYNCjAwMDAwMDE3MjggNjU1MzUgZg0KMDAwMDAwMTcyOSA2NTUzNSBmDQowMDAwMDAxNzMw IDY1NTM1IGYNCjAwMDAwMDE3MzEgNjU1MzUgZg0KMDAwMDAwMTczMiA2NTUzNSBmDQowMDAwMDAx 
NzMzIDY1NTM1IGYNCjAwMDAwMDE3MzQgNjU1MzUgZg0KMDAwMDAwMTczNSA2NTUzNSBmDQowMDAw MDAxNzM2IDY1NTM1IGYNCjAwMDAwMDE3MzcgNjU1MzUgZg0KMDAwMDAwMTczOCA2NTUzNSBmDQow MDAwMDAxNzM5IDY1NTM1IGYNCjAwMDAwMDE3NDAgNjU1MzUgZg0KMDAwMDAwMTc0MSA2NTUzNSBm DQowMDAwMDAxNzQyIDY1NTM1IGYNCjAwMDAwMDE3NDMgNjU1MzUgZg0KMDAwMDAwMTc0NCA2NTUz NSBmDQowMDAwMDAxNzQ1IDY1NTM1IGYNCjAwMDAwMDE3NDYgNjU1MzUgZg0KMDAwMDAwMTc0NyA2 NTUzNSBmDQowMDAwMDAxNzQ4IDY1NTM1IGYNCjAwMDAwMDE3NDkgNjU1MzUgZg0KMDAwMDAwMTc1 MCA2NTUzNSBmDQowMDAwMDAxNzUxIDY1NTM1IGYNCjAwMDAwMDE3NTIgNjU1MzUgZg0KMDAwMDAw MTc1MyA2NTUzNSBmDQowMDAwMDAxNzU0IDY1NTM1IGYNCjAwMDAwMDE3NTUgNjU1MzUgZg0KMDAw MDAwMTc1NiA2NTUzNSBmDQowMDAwMDAxNzU3IDY1NTM1IGYNCjAwMDAwMDE3NTggNjU1MzUgZg0K MDAwMDAwMTc1OSA2NTUzNSBmDQowMDAwMDAxNzYwIDY1NTM1IGYNCjAwMDAwMDE3NjEgNjU1MzUg Zg0KMDAwMDAwMTc2MiA2NTUzNSBmDQowMDAwMDAxNzYzIDY1NTM1IGYNCjAwMDAwMDE3NjQgNjU1 MzUgZg0KMDAwMDAwMTc2NSA2NTUzNSBmDQowMDAwMDAxNzY2IDY1NTM1IGYNCjAwMDAwMDE3Njcg NjU1MzUgZg0KMDAwMDAwMTc2OCA2NTUzNSBmDQowMDAwMDAxNzY5IDY1NTM1IGYNCjAwMDAwMDE3 NzAgNjU1MzUgZg0KMDAwMDAwMTc3MSA2NTUzNSBmDQowMDAwMDAxNzcyIDY1NTM1IGYNCjAwMDAw MDE3NzMgNjU1MzUgZg0KMDAwMDAwMTc3NCA2NTUzNSBmDQowMDAwMDAxNzc1IDY1NTM1IGYNCjAw MDAwMDE3NzYgNjU1MzUgZg0KMDAwMDAwMTc3NyA2NTUzNSBmDQowMDAwMDAxNzc4IDY1NTM1IGYN CjAwMDAwMDE3NzkgNjU1MzUgZg0KMDAwMDAwMTc4MCA2NTUzNSBmDQowMDAwMDAxNzgxIDY1NTM1 IGYNCjAwMDAwMDE3ODIgNjU1MzUgZg0KMDAwMDAwMTc4MyA2NTUzNSBmDQowMDAwMDAxNzg0IDY1 NTM1IGYNCjAwMDAwMDE3ODUgNjU1MzUgZg0KMDAwMDAwMTc4NiA2NTUzNSBmDQowMDAwMDAxNzg3 IDY1NTM1IGYNCjAwMDAwMDE3ODggNjU1MzUgZg0KMDAwMDAwMTc4OSA2NTUzNSBmDQowMDAwMDAx NzkwIDY1NTM1IGYNCjAwMDAwMDE3OTEgNjU1MzUgZg0KMDAwMDAwMTc5MiA2NTUzNSBmDQowMDAw MDAxNzkzIDY1NTM1IGYNCjAwMDAwMDE3OTQgNjU1MzUgZg0KMDAwMDAwMTc5NSA2NTUzNSBmDQow MDAwMDAxNzk2IDY1NTM1IGYNCjAwMDAwMDE3OTcgNjU1MzUgZg0KMDAwMDAwMTc5OCA2NTUzNSBm DQowMDAwMDAxNzk5IDY1NTM1IGYNCjAwMDAwMDE4MDAgNjU1MzUgZg0KMDAwMDAwMTgwMSA2NTUz NSBmDQowMDAwMDAxODAyIDY1NTM1IGYNCjAwMDAwMDE4MDMgNjU1MzUgZg0KMDAwMDAwMTgwNCA2 NTUzNSBmDQowMDAwMDAxODA1IDY1NTM1IGYNCjAwMDAwMDE4MDYgNjU1MzUgZg0KMDAwMDAwMTgw NyA2NTUzNSBmDQowMDAwMDAxODA4IDY1NTM1IGYNCjAwMDAwMDE4MDkgNjU1MzUgZg0KMDAwMDAw MTgxMCA2NTUzNSBmDQowMDAwMDAxODExIDY1NTM1IGYNCjAwMDAwMDE4MTIgNjU1MzUgZg0KMDAw MDAwMTgxMyA2NTUzNSBmDQowMDAwMDAxODE0IDY1NTM1IGYNCjAwMDAwMDE4MTUgNjU1MzUgZg0K MDAwMDAwMTgxNiA2NTUzNSBmDQowMDAwMDAxODE3IDY1NTM1IGYNCjAwMDAwMDE4MTggNjU1MzUg Zg0KMDAwMDAwMTgxOSA2NTUzNSBmDQowMDAwMDAxODIwIDY1NTM1IGYNCjAwMDAwMDE4MjEgNjU1 MzUgZg0KMDAwMDAwMTgyMiA2NTUzNSBmDQowMDAwMDAxODIzIDY1NTM1IGYNCjAwMDAwMDE4MjQg NjU1MzUgZg0KMDAwMDAwMTgyNSA2NTUzNSBmDQowMDAwMDAxODI2IDY1NTM1IGYNCjAwMDAwMDE4 MjcgNjU1MzUgZg0KMDAwMDAwMTgyOCA2NTUzNSBmDQowMDAwMDAxODI5IDY1NTM1IGYNCjAwMDAw MDE4MzAgNjU1MzUgZg0KMDAwMDAwMTgzMSA2NTUzNSBmDQowMDAwMDAxODMyIDY1NTM1IGYNCjAw MDAwMDE4MzMgNjU1MzUgZg0KMDAwMDAwMTgzNCA2NTUzNSBmDQowMDAwMDAxODM1IDY1NTM1IGYN CjAwMDAwMDE4MzYgNjU1MzUgZg0KMDAwMDAwMTgzNyA2NTUzNSBmDQowMDAwMDAxODM4IDY1NTM1 IGYNCjAwMDAwMDE4MzkgNjU1MzUgZg0KMDAwMDAwMTg0MCA2NTUzNSBmDQowMDAwMDAxODQxIDY1 NTM1IGYNCjAwMDAwMDE4NDIgNjU1MzUgZg0KMDAwMDAwMTg0MyA2NTUzNSBmDQowMDAwMDAxODQ0 IDY1NTM1IGYNCjAwMDAwMDE4NDUgNjU1MzUgZg0KMDAwMDAwMTg0NiA2NTUzNSBmDQowMDAwMDAx ODQ3IDY1NTM1IGYNCjAwMDAwMDE4NDggNjU1MzUgZg0KMDAwMDAwMTg0OSA2NTUzNSBmDQowMDAw MDAxODUwIDY1NTM1IGYNCjAwMDAwMDE4NTEgNjU1MzUgZg0KMDAwMDAwMTg1MiA2NTUzNSBmDQow MDAwMDAxODUzIDY1NTM1IGYNCjAwMDAwMDE4NTQgNjU1MzUgZg0KMDAwMDAwMTg1NSA2NTUzNSBm DQowMDAwMDAxODU2IDY1NTM1IGYNCjAwMDAwMDE4NTcgNjU1MzUgZg0KMDAwMDAwMTg1OCA2NTUz NSBmDQowMDAwMDAxODU5IDY1NTM1IGYNCjAwMDAwMDE4NjAgNjU1MzUgZg0KMDAwMDAwMTg2MSA2 NTUzNSBmDQowMDAwMDAxODYyIDY1NTM1IGYNCjAwMDAwMDE4NjMgNjU1MzUgZg0KMDAwMDAwMTg2 
NCA2NTUzNSBmDQowMDAwMDAxODY1IDY1NTM1IGYNCjAwMDAwMDE4NjYgNjU1MzUgZg0KMDAwMDAw MTg2NyA2NTUzNSBmDQowMDAwMDAxODY4IDY1NTM1IGYNCjAwMDAwMDE4NjkgNjU1MzUgZg0KMDAw MDAwMTg3MCA2NTUzNSBmDQowMDAwMDAxODcxIDY1NTM1IGYNCjAwMDAwMDE4NzIgNjU1MzUgZg0K MDAwMDAwMTg3MyA2NTUzNSBmDQowMDAwMDAxODc0IDY1NTM1IGYNCjAwMDAwMDE4NzUgNjU1MzUg Zg0KMDAwMDAwMTg3NiA2NTUzNSBmDQowMDAwMDAxODc3IDY1NTM1IGYNCjAwMDAwMDE4NzggNjU1 MzUgZg0KMDAwMDAwMTg3OSA2NTUzNSBmDQowMDAwMDAxODgwIDY1NTM1IGYNCjAwMDAwMDE4ODEg NjU1MzUgZg0KMDAwMDAwMTg4MiA2NTUzNSBmDQowMDAwMDAxODgzIDY1NTM1IGYNCjAwMDAwMDE4 ODQgNjU1MzUgZg0KMDAwMDAwMTg4NSA2NTUzNSBmDQowMDAwMDAxODg2IDY1NTM1IGYNCjAwMDAw MDE4ODcgNjU1MzUgZg0KMDAwMDAwMTg4OCA2NTUzNSBmDQowMDAwMDAxODg5IDY1NTM1IGYNCjAw MDAwMDE4OTAgNjU1MzUgZg0KMDAwMDAwMTg5MSA2NTUzNSBmDQowMDAwMDAxODkyIDY1NTM1IGYN CjAwMDAwMDE4OTMgNjU1MzUgZg0KMDAwMDAwMTg5NCA2NTUzNSBmDQowMDAwMDAxODk1IDY1NTM1 IGYNCjAwMDAwMDE4OTYgNjU1MzUgZg0KMDAwMDAwMTg5NyA2NTUzNSBmDQowMDAwMDAxODk4IDY1 NTM1IGYNCjAwMDAwMDE4OTkgNjU1MzUgZg0KMDAwMDAwMTkwMCA2NTUzNSBmDQowMDAwMDAxOTAx IDY1NTM1IGYNCjAwMDAwMDE5MDIgNjU1MzUgZg0KMDAwMDAwMTkwMyA2NTUzNSBmDQowMDAwMDAx OTA0IDY1NTM1IGYNCjAwMDAwMDE5MDUgNjU1MzUgZg0KMDAwMDAwMTkwNiA2NTUzNSBmDQowMDAw MDAxOTA3IDY1NTM1IGYNCjAwMDAwMDE5MDggNjU1MzUgZg0KMDAwMDAwMTkwOSA2NTUzNSBmDQow MDAwMDAxOTEwIDY1NTM1IGYNCjAwMDAwMDE5MTEgNjU1MzUgZg0KMDAwMDAwMTkxMiA2NTUzNSBm DQowMDAwMDAxOTEzIDY1NTM1IGYNCjAwMDAwMDE5MTQgNjU1MzUgZg0KMDAwMDAwMTkxNSA2NTUz NSBmDQowMDAwMDAxOTE2IDY1NTM1IGYNCjAwMDAwMDE5MTcgNjU1MzUgZg0KMDAwMDAwMTkxOCA2 NTUzNSBmDQowMDAwMDAxOTE5IDY1NTM1IGYNCjAwMDAwMDE5MjAgNjU1MzUgZg0KMDAwMDAwMTky MSA2NTUzNSBmDQowMDAwMDAxOTIyIDY1NTM1IGYNCjAwMDAwMDE5MjMgNjU1MzUgZg0KMDAwMDAw MTkyNCA2NTUzNSBmDQowMDAwMDAxOTI1IDY1NTM1IGYNCjAwMDAwMDE5MjYgNjU1MzUgZg0KMDAw MDAwMTkyNyA2NTUzNSBmDQowMDAwMDAxOTI4IDY1NTM1IGYNCjAwMDAwMDE5MjkgNjU1MzUgZg0K MDAwMDAwMTkzMCA2NTUzNSBmDQowMDAwMDAxOTMxIDY1NTM1IGYNCjAwMDAwMDE5MzIgNjU1MzUg Zg0KMDAwMDAwMTkzMyA2NTUzNSBmDQowMDAwMDAxOTM0IDY1NTM1IGYNCjAwMDAwMDE5MzUgNjU1 MzUgZg0KMDAwMDAwMTkzNiA2NTUzNSBmDQowMDAwMDAxOTM3IDY1NTM1IGYNCjAwMDAwMDE5Mzgg NjU1MzUgZg0KMDAwMDAwMTkzOSA2NTUzNSBmDQowMDAwMDAxOTQwIDY1NTM1IGYNCjAwMDAwMDE5 NDEgNjU1MzUgZg0KMDAwMDAwMTk0MiA2NTUzNSBmDQowMDAwMDAxOTQzIDY1NTM1IGYNCjAwMDAw MDE5NDQgNjU1MzUgZg0KMDAwMDAwMTk0NSA2NTUzNSBmDQowMDAwMDAxOTQ2IDY1NTM1IGYNCjAw MDAwMDE5NDcgNjU1MzUgZg0KMDAwMDAwMTk0OCA2NTUzNSBmDQowMDAwMDAxOTQ5IDY1NTM1IGYN CjAwMDAwMDE5NTAgNjU1MzUgZg0KMDAwMDAwMTk1MSA2NTUzNSBmDQowMDAwMDAxOTUyIDY1NTM1 IGYNCjAwMDAwMDE5NTMgNjU1MzUgZg0KMDAwMDAwMTk1NCA2NTUzNSBmDQowMDAwMDAxOTU1IDY1 NTM1IGYNCjAwMDAwMDE5NTYgNjU1MzUgZg0KMDAwMDAwMTk1NyA2NTUzNSBmDQowMDAwMDAxOTU4 IDY1NTM1IGYNCjAwMDAwMDE5NTkgNjU1MzUgZg0KMDAwMDAwMTk2MCA2NTUzNSBmDQowMDAwMDAx OTYxIDY1NTM1IGYNCjAwMDAwMDE5NjIgNjU1MzUgZg0KMDAwMDAwMTk2MyA2NTUzNSBmDQowMDAw MDAxOTY0IDY1NTM1IGYNCjAwMDAwMDE5NjUgNjU1MzUgZg0KMDAwMDAwMTk2NiA2NTUzNSBmDQow MDAwMDAxOTY3IDY1NTM1IGYNCjAwMDAwMDE5NjggNjU1MzUgZg0KMDAwMDAwMTk2OSA2NTUzNSBm DQowMDAwMDAxOTcwIDY1NTM1IGYNCjAwMDAwMDE5NzEgNjU1MzUgZg0KMDAwMDAwMTk3MiA2NTUz NSBmDQowMDAwMDAxOTczIDY1NTM1IGYNCjAwMDAwMDE5NzQgNjU1MzUgZg0KMDAwMDAwMTk3NSA2 NTUzNSBmDQowMDAwMDAxOTc2IDY1NTM1IGYNCjAwMDAwMDE5NzcgNjU1MzUgZg0KMDAwMDAwMTk3 OCA2NTUzNSBmDQowMDAwMDAxOTc5IDY1NTM1IGYNCjAwMDAwMDE5ODAgNjU1MzUgZg0KMDAwMDAw MTk4MSA2NTUzNSBmDQowMDAwMDAxOTgyIDY1NTM1IGYNCjAwMDAwMDE5ODMgNjU1MzUgZg0KMDAw MDAwMTk4NCA2NTUzNSBmDQowMDAwMDAxOTg1IDY1NTM1IGYNCjAwMDAwMDE5ODYgNjU1MzUgZg0K MDAwMDAwMTk4NyA2NTUzNSBmDQowMDAwMDAxOTg4IDY1NTM1IGYNCjAwMDAwMDE5ODkgNjU1MzUg Zg0KMDAwMDAwMTk5MCA2NTUzNSBmDQowMDAwMDAxOTkxIDY1NTM1IGYNCjAwMDAwMDE5OTIgNjU1 MzUgZg0KMDAwMDAwMTk5MyA2NTUzNSBmDQowMDAwMDAxOTk0IDY1NTM1IGYNCjAwMDAwMDE5OTUg 
NjU1MzUgZg0KMDAwMDAwMTk5NiA2NTUzNSBmDQowMDAwMDAxOTk3IDY1NTM1IGYNCjAwMDAwMDE5 OTggNjU1MzUgZg0KMDAwMDAwMTk5OSA2NTUzNSBmDQowMDAwMDAyMDAwIDY1NTM1IGYNCjAwMDAw MDIwMDEgNjU1MzUgZg0KMDAwMDAwMjAwMiA2NTUzNSBmDQowMDAwMDAyMDAzIDY1NTM1IGYNCjAw MDAwMDIwMDQgNjU1MzUgZg0KMDAwMDAwMjAwNSA2NTUzNSBmDQowMDAwMDAyMDA2IDY1NTM1IGYN CjAwMDAwMDIwMDcgNjU1MzUgZg0KMDAwMDAwMjAwOCA2NTUzNSBmDQowMDAwMDAyMDA5IDY1NTM1 IGYNCjAwMDAwMDIwMTAgNjU1MzUgZg0KMDAwMDAwMjAxMSA2NTUzNSBmDQowMDAwMDAyMDEyIDY1 NTM1IGYNCjAwMDAwMDIwMTMgNjU1MzUgZg0KMDAwMDAwMjAxNCA2NTUzNSBmDQowMDAwMDAyMDE1 IDY1NTM1IGYNCjAwMDAwMDIwMTYgNjU1MzUgZg0KMDAwMDAwMjAxNyA2NTUzNSBmDQowMDAwMDAy MDE4IDY1NTM1IGYNCjAwMDAwMDIwMTkgNjU1MzUgZg0KMDAwMDAwMjAyMCA2NTUzNSBmDQowMDAw MDAyMDIxIDY1NTM1IGYNCjAwMDAwMDIwMjIgNjU1MzUgZg0KMDAwMDAwMjAyMyA2NTUzNSBmDQow MDAwMDAyMDI0IDY1NTM1IGYNCjAwMDAwMDIwMjUgNjU1MzUgZg0KMDAwMDAwMjAyNiA2NTUzNSBm DQowMDAwMDAyMDI3IDY1NTM1IGYNCjAwMDAwMDIwMjggNjU1MzUgZg0KMDAwMDAwMjAyOSA2NTUz NSBmDQowMDAwMDAyMDMwIDY1NTM1IGYNCjAwMDAwMDIwMzEgNjU1MzUgZg0KMDAwMDAwMjAzMiA2 NTUzNSBmDQowMDAwMDAyMDMzIDY1NTM1IGYNCjAwMDAwMDIwMzQgNjU1MzUgZg0KMDAwMDAwMjAz NSA2NTUzNSBmDQowMDAwMDAyMDM2IDY1NTM1IGYNCjAwMDAwMDIwMzcgNjU1MzUgZg0KMDAwMDAw MjAzOCA2NTUzNSBmDQowMDAwMDAyMDM5IDY1NTM1IGYNCjAwMDAwMDIwNDAgNjU1MzUgZg0KMDAw MDAwMjA0MSA2NTUzNSBmDQowMDAwMDAyMDQyIDY1NTM1IGYNCjAwMDAwMDIwNDMgNjU1MzUgZg0K MDAwMDAwMjA0NCA2NTUzNSBmDQowMDAwMDAyMDQ1IDY1NTM1IGYNCjAwMDAwMDIwNDYgNjU1MzUg Zg0KMDAwMDAwMjA0NyA2NTUzNSBmDQowMDAwMDAyMDQ4IDY1NTM1IGYNCjAwMDAwMDIwNDkgNjU1 MzUgZg0KMDAwMDAwMjA1MCA2NTUzNSBmDQowMDAwMDAyMDUxIDY1NTM1IGYNCjAwMDAwMDIwNTIg NjU1MzUgZg0KMDAwMDAwMjA1MyA2NTUzNSBmDQowMDAwMDAyMDU0IDY1NTM1IGYNCjAwMDAwMDIw NTUgNjU1MzUgZg0KMDAwMDAwMjA1NiA2NTUzNSBmDQowMDAwMDAyMDU3IDY1NTM1IGYNCjAwMDAw MDIwNTggNjU1MzUgZg0KMDAwMDAwMjA1OSA2NTUzNSBmDQowMDAwMDAyMDYwIDY1NTM1IGYNCjAw MDAwMDIwNjEgNjU1MzUgZg0KMDAwMDAwMjA2MiA2NTUzNSBmDQowMDAwMDAyMDYzIDY1NTM1IGYN CjAwMDAwMDIwNjQgNjU1MzUgZg0KMDAwMDAwMjA2NSA2NTUzNSBmDQowMDAwMDAyMDY2IDY1NTM1 IGYNCjAwMDAwMDIwNjcgNjU1MzUgZg0KMDAwMDAwMjA2OCA2NTUzNSBmDQowMDAwMDAyMDY5IDY1 NTM1IGYNCjAwMDAwMDIwNzAgNjU1MzUgZg0KMDAwMDAwMjA3MSA2NTUzNSBmDQowMDAwMDAyMDcy IDY1NTM1IGYNCjAwMDAwMDIwNzMgNjU1MzUgZg0KMDAwMDAwMjA3NCA2NTUzNSBmDQowMDAwMDAy MDc1IDY1NTM1IGYNCjAwMDAwMDIwNzYgNjU1MzUgZg0KMDAwMDAwMjA3NyA2NTUzNSBmDQowMDAw MDAyMDc4IDY1NTM1IGYNCjAwMDAwMDIwNzkgNjU1MzUgZg0KMDAwMDAwMjA4MCA2NTUzNSBmDQow MDAwMDAyMDgxIDY1NTM1IGYNCjAwMDAwMDIwODIgNjU1MzUgZg0KMDAwMDAwMjA4MyA2NTUzNSBm DQowMDAwMDAyMDg0IDY1NTM1IGYNCjAwMDAwMDIwODUgNjU1MzUgZg0KMDAwMDAwMjA4NiA2NTUz NSBmDQowMDAwMDAyMDg3IDY1NTM1IGYNCjAwMDAwMDIwODggNjU1MzUgZg0KMDAwMDAwMjA4OSA2 NTUzNSBmDQowMDAwMDAyMDkwIDY1NTM1IGYNCjAwMDAwMDIwOTEgNjU1MzUgZg0KMDAwMDAwMjA5 MiA2NTUzNSBmDQowMDAwMDAyMDkzIDY1NTM1IGYNCjAwMDAwMDIwOTQgNjU1MzUgZg0KMDAwMDAw MjA5NSA2NTUzNSBmDQowMDAwMDAyMDk2IDY1NTM1IGYNCjAwMDAwMDIwOTcgNjU1MzUgZg0KMDAw MDAwMjA5OCA2NTUzNSBmDQowMDAwMDAyMDk5IDY1NTM1IGYNCjAwMDAwMDIxMDAgNjU1MzUgZg0K MDAwMDAwMjEwMSA2NTUzNSBmDQowMDAwMDAyMTAyIDY1NTM1IGYNCjAwMDAwMDIxMDMgNjU1MzUg Zg0KMDAwMDAwMjEwNCA2NTUzNSBmDQowMDAwMDAyMTA1IDY1NTM1IGYNCjAwMDAwMDIxMDYgNjU1 MzUgZg0KMDAwMDAwMjEwNyA2NTUzNSBmDQowMDAwMDAyMTA4IDY1NTM1IGYNCjAwMDAwMDIxMDkg NjU1MzUgZg0KMDAwMDAwMjExMCA2NTUzNSBmDQowMDAwMDAyMTExIDY1NTM1IGYNCjAwMDAwMDIx MTIgNjU1MzUgZg0KMDAwMDAwMjExMyA2NTUzNSBmDQowMDAwMDAyMTE0IDY1NTM1IGYNCjAwMDAw MDIxMTUgNjU1MzUgZg0KMDAwMDAwMjExNiA2NTUzNSBmDQowMDAwMDAyMTE3IDY1NTM1IGYNCjAw MDAwMDIxMTggNjU1MzUgZg0KMDAwMDAwMjExOSA2NTUzNSBmDQowMDAwMDAyMTIwIDY1NTM1IGYN CjAwMDAwMDIxMjEgNjU1MzUgZg0KMDAwMDAwMjEyMiA2NTUzNSBmDQowMDAwMDAyMTIzIDY1NTM1 IGYNCjAwMDAwMDIxMjQgNjU1MzUgZg0KMDAwMDAwMjEyNSA2NTUzNSBmDQowMDAwMDAyMTI2IDY1 
NTM1IGYNCjAwMDAwMDIxMjcgNjU1MzUgZg0KMDAwMDAwMjEyOCA2NTUzNSBmDQowMDAwMDAyMTI5 IDY1NTM1IGYNCjAwMDAwMDIxMzAgNjU1MzUgZg0KMDAwMDAwMjEzMSA2NTUzNSBmDQowMDAwMDAy MTMyIDY1NTM1IGYNCjAwMDAwMDIxMzMgNjU1MzUgZg0KMDAwMDAwMjEzNCA2NTUzNSBmDQowMDAw MDAyMTM1IDY1NTM1IGYNCjAwMDAwMDIxMzYgNjU1MzUgZg0KMDAwMDAwMjEzNyA2NTUzNSBmDQow MDAwMDAyMTM4IDY1NTM1IGYNCjAwMDAwMDIxMzkgNjU1MzUgZg0KMDAwMDAwMjE0MCA2NTUzNSBm DQowMDAwMDAyMTQxIDY1NTM1IGYNCjAwMDAwMDIxNDIgNjU1MzUgZg0KMDAwMDAwMjE0MyA2NTUz NSBmDQowMDAwMDAyMTQ0IDY1NTM1IGYNCjAwMDAwMDIxNDUgNjU1MzUgZg0KMDAwMDAwMjE0NiA2 NTUzNSBmDQowMDAwMDAyMTQ3IDY1NTM1IGYNCjAwMDAwMDIxNDggNjU1MzUgZg0KMDAwMDAwMjE0 OSA2NTUzNSBmDQowMDAwMDAyMTUwIDY1NTM1IGYNCjAwMDAwMDIxNTEgNjU1MzUgZg0KMDAwMDAw MjE1MiA2NTUzNSBmDQowMDAwMDAyMTUzIDY1NTM1IGYNCjAwMDAwMDIxNTQgNjU1MzUgZg0KMDAw MDAwMjE1NSA2NTUzNSBmDQowMDAwMDAyMTU2IDY1NTM1IGYNCjAwMDAwMDIxNTcgNjU1MzUgZg0K MDAwMDAwMjE1OCA2NTUzNSBmDQowMDAwMDAyMTU5IDY1NTM1IGYNCjAwMDAwMDIxNjAgNjU1MzUg Zg0KMDAwMDAwMjE2MSA2NTUzNSBmDQowMDAwMDAyMTYyIDY1NTM1IGYNCjAwMDAwMDIxNjMgNjU1 MzUgZg0KMDAwMDAwMjE2NCA2NTUzNSBmDQowMDAwMDAyMTY1IDY1NTM1IGYNCjAwMDAwMDIxNjYg NjU1MzUgZg0KMDAwMDAwMjE2NyA2NTUzNSBmDQowMDAwMDAyMTY4IDY1NTM1IGYNCjAwMDAwMDIx NjkgNjU1MzUgZg0KMDAwMDAwMjE3MCA2NTUzNSBmDQowMDAwMDAyMTcxIDY1NTM1IGYNCjAwMDAw MDIxNzIgNjU1MzUgZg0KMDAwMDAwMjE3MyA2NTUzNSBmDQowMDAwMDAyMTc0IDY1NTM1IGYNCjAw MDAwMDIxNzUgNjU1MzUgZg0KMDAwMDAwMjE3NiA2NTUzNSBmDQowMDAwMDAyMTc3IDY1NTM1IGYN CjAwMDAwMDIxNzggNjU1MzUgZg0KMDAwMDAwMjE3OSA2NTUzNSBmDQowMDAwMDAyMTgwIDY1NTM1 IGYNCjAwMDAwMDIxODEgNjU1MzUgZg0KMDAwMDAwMjE4MiA2NTUzNSBmDQowMDAwMDAyMTgzIDY1 NTM1IGYNCjAwMDAwMDIxODQgNjU1MzUgZg0KMDAwMDAwMjE4NSA2NTUzNSBmDQowMDAwMDAyMTg2 IDY1NTM1IGYNCjAwMDAwMDIxODcgNjU1MzUgZg0KMDAwMDAwMjE4OCA2NTUzNSBmDQowMDAwMDAy MTg5IDY1NTM1IGYNCjAwMDAwMDIxOTAgNjU1MzUgZg0KMDAwMDAwMjE5MSA2NTUzNSBmDQowMDAw MDAyMTkyIDY1NTM1IGYNCjAwMDAwMDIxOTMgNjU1MzUgZg0KMDAwMDAwMjE5NCA2NTUzNSBmDQow MDAwMDAyMTk1IDY1NTM1IGYNCjAwMDAwMDIxOTYgNjU1MzUgZg0KMDAwMDAwMjE5NyA2NTUzNSBm DQowMDAwMDAyMTk4IDY1NTM1IGYNCjAwMDAwMDIxOTkgNjU1MzUgZg0KMDAwMDAwMjIwMCA2NTUz NSBmDQowMDAwMDAyMjAxIDY1NTM1IGYNCjAwMDAwMDIyMDIgNjU1MzUgZg0KMDAwMDAwMjIwMyA2 NTUzNSBmDQowMDAwMDAyMjA0IDY1NTM1IGYNCjAwMDAwMDIyMDUgNjU1MzUgZg0KMDAwMDAwMjIw NiA2NTUzNSBmDQowMDAwMDAyMjA3IDY1NTM1IGYNCjAwMDAwMDIyMDggNjU1MzUgZg0KMDAwMDAw MjIwOSA2NTUzNSBmDQowMDAwMDAyMjEwIDY1NTM1IGYNCjAwMDAwMDIyMTEgNjU1MzUgZg0KMDAw MDAwMjIxMiA2NTUzNSBmDQowMDAwMDAyMjEzIDY1NTM1IGYNCjAwMDAwMDIyMTQgNjU1MzUgZg0K MDAwMDAwMjIxNSA2NTUzNSBmDQowMDAwMDAyMjE2IDY1NTM1IGYNCjAwMDAwMDIyMTcgNjU1MzUg Zg0KMDAwMDAwMjIxOCA2NTUzNSBmDQowMDAwMDAyMjE5IDY1NTM1IGYNCjAwMDAwMDIyMjAgNjU1 MzUgZg0KMDAwMDAwMjIyMSA2NTUzNSBmDQowMDAwMDAyMjIyIDY1NTM1IGYNCjAwMDAwMDIyMjMg NjU1MzUgZg0KMDAwMDAwMjIyNCA2NTUzNSBmDQowMDAwMDAyMjI1IDY1NTM1IGYNCjAwMDAwMDIy MjYgNjU1MzUgZg0KMDAwMDAwMjIyNyA2NTUzNSBmDQowMDAwMDAyMjI4IDY1NTM1IGYNCjAwMDAw MDIyMjkgNjU1MzUgZg0KMDAwMDAwMjIzMCA2NTUzNSBmDQowMDAwMDAyMjMxIDY1NTM1IGYNCjAw MDAwMDIyMzIgNjU1MzUgZg0KMDAwMDAwMjIzMyA2NTUzNSBmDQowMDAwMDAyMjM0IDY1NTM1IGYN CjAwMDAwMDIyMzUgNjU1MzUgZg0KMDAwMDAwMjIzNiA2NTUzNSBmDQowMDAwMDAyMjM3IDY1NTM1 IGYNCjAwMDAwMDIyMzggNjU1MzUgZg0KMDAwMDAwMjIzOSA2NTUzNSBmDQowMDAwMDAyMjQwIDY1 NTM1IGYNCjAwMDAwMDIyNDEgNjU1MzUgZg0KMDAwMDAwMjI0MiA2NTUzNSBmDQowMDAwMDAyMjQz IDY1NTM1IGYNCjAwMDAwMDIyNDQgNjU1MzUgZg0KMDAwMDAwMjI0NSA2NTUzNSBmDQowMDAwMDAy MjQ2IDY1NTM1IGYNCjAwMDAwMDIyNDcgNjU1MzUgZg0KMDAwMDAwMjI0OCA2NTUzNSBmDQowMDAw MDAyMjQ5IDY1NTM1IGYNCjAwMDAwMDIyNTAgNjU1MzUgZg0KMDAwMDAwMjI1MSA2NTUzNSBmDQow MDAwMDAyMjUyIDY1NTM1IGYNCjAwMDAwMDIyNTMgNjU1MzUgZg0KMDAwMDAwMjI1NCA2NTUzNSBm DQowMDAwMDAyMjU1IDY1NTM1IGYNCjAwMDAwMDIyNTYgNjU1MzUgZg0KMDAwMDAwMjI1NyA2NTUz 
NSBmDQowMDAwMDAyMjU4IDY1NTM1IGYNCjAwMDAwMDIyNTkgNjU1MzUgZg0KMDAwMDAwMjI2MCA2 NTUzNSBmDQowMDAwMDAyMjYxIDY1NTM1IGYNCjAwMDAwMDIyNjIgNjU1MzUgZg0KMDAwMDAwMjI2 MyA2NTUzNSBmDQowMDAwMDAyMjY0IDY1NTM1IGYNCjAwMDAwMDIyNjUgNjU1MzUgZg0KMDAwMDAw MjI2NiA2NTUzNSBmDQowMDAwMDAyMjY3IDY1NTM1IGYNCjAwMDAwMDIyNjggNjU1MzUgZg0KMDAw MDAwMjI2OSA2NTUzNSBmDQowMDAwMDAyMjcwIDY1NTM1IGYNCjAwMDAwMDIyNzEgNjU1MzUgZg0K MDAwMDAwMjI3MiA2NTUzNSBmDQowMDAwMDAyMjczIDY1NTM1IGYNCjAwMDAwMDIyNzQgNjU1MzUg Zg0KMDAwMDAwMjI3NSA2NTUzNSBmDQowMDAwMDAyMjc2IDY1NTM1IGYNCjAwMDAwMDIyNzcgNjU1 MzUgZg0KMDAwMDAwMjI3OCA2NTUzNSBmDQowMDAwMDAyMjc5IDY1NTM1IGYNCjAwMDAwMDIyODAg NjU1MzUgZg0KMDAwMDAwMjI4MSA2NTUzNSBmDQowMDAwMDAyMjgyIDY1NTM1IGYNCjAwMDAwMDIy ODMgNjU1MzUgZg0KMDAwMDAwMjI4NCA2NTUzNSBmDQowMDAwMDAyMjg1IDY1NTM1IGYNCjAwMDAw MDIyODYgNjU1MzUgZg0KMDAwMDAwMjI4NyA2NTUzNSBmDQowMDAwMDAyMjg4IDY1NTM1IGYNCjAw MDAwMDIyODkgNjU1MzUgZg0KMDAwMDAwMjI5MCA2NTUzNSBmDQowMDAwMDAyMjkxIDY1NTM1IGYN CjAwMDAwMDIyOTIgNjU1MzUgZg0KMDAwMDAwMjI5MyA2NTUzNSBmDQowMDAwMDAyMjk0IDY1NTM1 IGYNCjAwMDAwMDIyOTUgNjU1MzUgZg0KMDAwMDAwMjI5NiA2NTUzNSBmDQowMDAwMDAyMjk3IDY1 NTM1IGYNCjAwMDAwMDIyOTggNjU1MzUgZg0KMDAwMDAwMjI5OSA2NTUzNSBmDQowMDAwMDAyMzAw IDY1NTM1IGYNCjAwMDAwMDIzMDEgNjU1MzUgZg0KMDAwMDAwMjMwMiA2NTUzNSBmDQowMDAwMDAy MzAzIDY1NTM1IGYNCjAwMDAwMDIzMDQgNjU1MzUgZg0KMDAwMDAwMjMwNSA2NTUzNSBmDQowMDAw MDAyMzA2IDY1NTM1IGYNCjAwMDAwMDIzMDcgNjU1MzUgZg0KMDAwMDAwMjMwOCA2NTUzNSBmDQow MDAwMDAyMzA5IDY1NTM1IGYNCjAwMDAwMDIzMTAgNjU1MzUgZg0KMDAwMDAwMjMxMSA2NTUzNSBm DQowMDAwMDAyMzEyIDY1NTM1IGYNCjAwMDAwMDIzMTMgNjU1MzUgZg0KMDAwMDAwMjMxNCA2NTUz NSBmDQowMDAwMDAyMzE1IDY1NTM1IGYNCjAwMDAwMDIzMTYgNjU1MzUgZg0KMDAwMDAwMjMxNyA2 NTUzNSBmDQowMDAwMDAyMzE4IDY1NTM1IGYNCjAwMDAwMDIzMTkgNjU1MzUgZg0KMDAwMDAwMjMy MCA2NTUzNSBmDQowMDAwMDAyMzIxIDY1NTM1IGYNCjAwMDAwMDIzMjIgNjU1MzUgZg0KMDAwMDAw MjMyMyA2NTUzNSBmDQowMDAwMDAyMzI0IDY1NTM1IGYNCjAwMDAwMDIzMjUgNjU1MzUgZg0KMDAw MDAwMjMyNiA2NTUzNSBmDQowMDAwMDAyMzI3IDY1NTM1IGYNCjAwMDAwMDIzMjggNjU1MzUgZg0K MDAwMDAwMjMyOSA2NTUzNSBmDQowMDAwMDAyMzMwIDY1NTM1IGYNCjAwMDAwMDIzMzEgNjU1MzUg Zg0KMDAwMDAwMjMzMiA2NTUzNSBmDQowMDAwMDAyMzMzIDY1NTM1IGYNCjAwMDAwMDIzMzQgNjU1 MzUgZg0KMDAwMDAwMjMzNSA2NTUzNSBmDQowMDAwMDAyMzM2IDY1NTM1IGYNCjAwMDAwMDIzMzcg NjU1MzUgZg0KMDAwMDAwMjMzOCA2NTUzNSBmDQowMDAwMDAyMzM5IDY1NTM1IGYNCjAwMDAwMDIz NDAgNjU1MzUgZg0KMDAwMDAwMjM0MSA2NTUzNSBmDQowMDAwMDAyMzQyIDY1NTM1IGYNCjAwMDAw MDIzNDMgNjU1MzUgZg0KMDAwMDAwMjM0NCA2NTUzNSBmDQowMDAwMDAyMzQ1IDY1NTM1IGYNCjAw MDAwMDIzNDYgNjU1MzUgZg0KMDAwMDAwMjM0NyA2NTUzNSBmDQowMDAwMDAyMzQ4IDY1NTM1IGYN CjAwMDAwMDIzNDkgNjU1MzUgZg0KMDAwMDAwMjM1MCA2NTUzNSBmDQowMDAwMDAyMzUxIDY1NTM1 IGYNCjAwMDAwMDIzNTIgNjU1MzUgZg0KMDAwMDAwMjM1MyA2NTUzNSBmDQowMDAwMDAyMzU0IDY1 NTM1IGYNCjAwMDAwMDIzNTUgNjU1MzUgZg0KMDAwMDAwMjM1NiA2NTUzNSBmDQowMDAwMDAyMzU3 IDY1NTM1IGYNCjAwMDAwMDIzNTggNjU1MzUgZg0KMDAwMDAwMjM1OSA2NTUzNSBmDQowMDAwMDAy MzYwIDY1NTM1IGYNCjAwMDAwMDIzNjEgNjU1MzUgZg0KMDAwMDAwMjM2MiA2NTUzNSBmDQowMDAw MDAyMzYzIDY1NTM1IGYNCjAwMDAwMDIzNjQgNjU1MzUgZg0KMDAwMDAwMjM2NSA2NTUzNSBmDQow MDAwMDAyMzY2IDY1NTM1IGYNCjAwMDAwMDIzNjcgNjU1MzUgZg0KMDAwMDAwMjM2OCA2NTUzNSBm DQowMDAwMDAyMzY5IDY1NTM1IGYNCjAwMDAwMDIzNzAgNjU1MzUgZg0KMDAwMDAwMjM3MSA2NTUz NSBmDQowMDAwMDAyMzcyIDY1NTM1IGYNCjAwMDAwMDIzNzMgNjU1MzUgZg0KMDAwMDAwMjM3NCA2 NTUzNSBmDQowMDAwMDAyMzc1IDY1NTM1IGYNCjAwMDAwMDIzNzYgNjU1MzUgZg0KMDAwMDAwMjM3 NyA2NTUzNSBmDQowMDAwMDAyMzc4IDY1NTM1IGYNCjAwMDAwMDIzNzkgNjU1MzUgZg0KMDAwMDAw MjM4MCA2NTUzNSBmDQowMDAwMDAyMzgxIDY1NTM1IGYNCjAwMDAwMDIzODIgNjU1MzUgZg0KMDAw MDAwMjM4MyA2NTUzNSBmDQowMDAwMDAyMzg0IDY1NTM1IGYNCjAwMDAwMDIzODUgNjU1MzUgZg0K MDAwMDAwMjM4NiA2NTUzNSBmDQowMDAwMDAyMzg3IDY1NTM1IGYNCjAwMDAwMDIzODggNjU1MzUg 
Zg0KMDAwMDAwMjM4OSA2NTUzNSBmDQowMDAwMDAyMzkwIDY1NTM1IGYNCjAwMDAwMDIzOTEgNjU1 MzUgZg0KMDAwMDAwMjM5MiA2NTUzNSBmDQowMDAwMDAyMzkzIDY1NTM1IGYNCjAwMDAwMDIzOTQg NjU1MzUgZg0KMDAwMDAwMjM5NSA2NTUzNSBmDQowMDAwMDAyMzk2IDY1NTM1IGYNCjAwMDAwMDIz OTcgNjU1MzUgZg0KMDAwMDAwMjM5OCA2NTUzNSBmDQowMDAwMDAyMzk5IDY1NTM1IGYNCjAwMDAw MDI0MDAgNjU1MzUgZg0KMDAwMDAwMjQwMSA2NTUzNSBmDQowMDAwMDAyNDAyIDY1NTM1IGYNCjAw MDAwMDI0MDMgNjU1MzUgZg0KMDAwMDAwMjQwNCA2NTUzNSBmDQowMDAwMDAyNDA1IDY1NTM1IGYN CjAwMDAwMDI0MDYgNjU1MzUgZg0KMDAwMDAwMjQwNyA2NTUzNSBmDQowMDAwMDAyNDA4IDY1NTM1 IGYNCjAwMDAwMDI0MDkgNjU1MzUgZg0KMDAwMDAwMjQxMCA2NTUzNSBmDQowMDAwMDAyNDExIDY1 NTM1IGYNCjAwMDAwMDI0MTIgNjU1MzUgZg0KMDAwMDAwMjQxMyA2NTUzNSBmDQowMDAwMDAyNDE0 IDY1NTM1IGYNCjAwMDAwMDI0MTUgNjU1MzUgZg0KMDAwMDAwMjQxNiA2NTUzNSBmDQowMDAwMDAy NDE3IDY1NTM1IGYNCjAwMDAwMDI0MTggNjU1MzUgZg0KMDAwMDAwMjQxOSA2NTUzNSBmDQowMDAw MDAyNDIwIDY1NTM1IGYNCjAwMDAwMDI0MjEgNjU1MzUgZg0KMDAwMDAwMjQyMiA2NTUzNSBmDQow MDAwMDAyNDIzIDY1NTM1IGYNCjAwMDAwMDI0MjQgNjU1MzUgZg0KMDAwMDAwMjQyNSA2NTUzNSBm DQowMDAwMDAyNDI2IDY1NTM1IGYNCjAwMDAwMDI0MjcgNjU1MzUgZg0KMDAwMDAwMjQyOCA2NTUz NSBmDQowMDAwMDAyNDI5IDY1NTM1IGYNCjAwMDAwMDI0MzAgNjU1MzUgZg0KMDAwMDAwMjQzMSA2 NTUzNSBmDQowMDAwMDAyNDMyIDY1NTM1IGYNCjAwMDAwMDI0MzMgNjU1MzUgZg0KMDAwMDAwMjQz NCA2NTUzNSBmDQowMDAwMDAyNDM1IDY1NTM1IGYNCjAwMDAwMDI0MzYgNjU1MzUgZg0KMDAwMDAw MjQzNyA2NTUzNSBmDQowMDAwMDAyNDM4IDY1NTM1IGYNCjAwMDAwMDI0MzkgNjU1MzUgZg0KMDAw MDAwMjQ0MCA2NTUzNSBmDQowMDAwMDAyNDQxIDY1NTM1IGYNCjAwMDAwMDI0NDIgNjU1MzUgZg0K MDAwMDAwMjQ0MyA2NTUzNSBmDQowMDAwMDAyNDQ0IDY1NTM1IGYNCjAwMDAwMDI0NDUgNjU1MzUg Zg0KMDAwMDAwMjQ0NiA2NTUzNSBmDQowMDAwMDAyNDQ3IDY1NTM1IGYNCjAwMDAwMDI0NDggNjU1 MzUgZg0KMDAwMDAwMjQ0OSA2NTUzNSBmDQowMDAwMDAyNDUwIDY1NTM1IGYNCjAwMDAwMDI0NTEg NjU1MzUgZg0KMDAwMDAwMjQ1MiA2NTUzNSBmDQowMDAwMDAyNDUzIDY1NTM1IGYNCjAwMDAwMDI0 NTQgNjU1MzUgZg0KMDAwMDAwMjQ1NSA2NTUzNSBmDQowMDAwMDAyNDU2IDY1NTM1IGYNCjAwMDAw MDI0NTcgNjU1MzUgZg0KMDAwMDAwMjQ1OCA2NTUzNSBmDQowMDAwMDAyNDU5IDY1NTM1IGYNCjAw MDAwMDI0NjAgNjU1MzUgZg0KMDAwMDAwMjQ2MSA2NTUzNSBmDQowMDAwMDAyNDYyIDY1NTM1IGYN CjAwMDAwMDI0NjMgNjU1MzUgZg0KMDAwMDAwMjQ2NCA2NTUzNSBmDQowMDAwMDAyNDY1IDY1NTM1 IGYNCjAwMDAwMDI0NjYgNjU1MzUgZg0KMDAwMDAwMjQ2NyA2NTUzNSBmDQowMDAwMDAyNDY4IDY1 NTM1IGYNCjAwMDAwMDI0NjkgNjU1MzUgZg0KMDAwMDAwMjQ3MCA2NTUzNSBmDQowMDAwMDAyNDcx IDY1NTM1IGYNCjAwMDAwMDI0NzIgNjU1MzUgZg0KMDAwMDAwMjQ3MyA2NTUzNSBmDQowMDAwMDAy NDc0IDY1NTM1IGYNCjAwMDAwMDI0NzUgNjU1MzUgZg0KMDAwMDAwMjQ3NiA2NTUzNSBmDQowMDAw MDAyNDc3IDY1NTM1IGYNCjAwMDAwMDI0NzggNjU1MzUgZg0KMDAwMDAwMjQ3OSA2NTUzNSBmDQow MDAwMDAyNDgwIDY1NTM1IGYNCjAwMDAwMDI0ODEgNjU1MzUgZg0KMDAwMDAwMjQ4MiA2NTUzNSBm DQowMDAwMDAyNDgzIDY1NTM1IGYNCjAwMDAwMDI0ODQgNjU1MzUgZg0KMDAwMDAwMjQ4NSA2NTUz NSBmDQowMDAwMDAyNDg2IDY1NTM1IGYNCjAwMDAwMDI0ODcgNjU1MzUgZg0KMDAwMDAwMjQ4OCA2 NTUzNSBmDQowMDAwMDAyNDg5IDY1NTM1IGYNCjAwMDAwMDI0OTAgNjU1MzUgZg0KMDAwMDAwMjQ5 MSA2NTUzNSBmDQowMDAwMDAyNDkyIDY1NTM1IGYNCjAwMDAwMDI0OTMgNjU1MzUgZg0KMDAwMDAw MjQ5NCA2NTUzNSBmDQowMDAwMDAyNDk1IDY1NTM1IGYNCjAwMDAwMDI0OTYgNjU1MzUgZg0KMDAw MDAwMjQ5NyA2NTUzNSBmDQowMDAwMDAyNDk4IDY1NTM1IGYNCjAwMDAwMDI0OTkgNjU1MzUgZg0K MDAwMDAwMjUwMCA2NTUzNSBmDQowMDAwMDAyNTAxIDY1NTM1IGYNCjAwMDAwMDI1MDIgNjU1MzUg Zg0KMDAwMDAwMjUwMyA2NTUzNSBmDQowMDAwMDAyNTA0IDY1NTM1IGYNCjAwMDAwMDI1MDUgNjU1 MzUgZg0KMDAwMDAwMjUwNiA2NTUzNSBmDQowMDAwMDAyNTA3IDY1NTM1IGYNCjAwMDAwMDI1MDgg NjU1MzUgZg0KMDAwMDAwMjUwOSA2NTUzNSBmDQowMDAwMDAyNTEwIDY1NTM1IGYNCjAwMDAwMDI1 MTEgNjU1MzUgZg0KMDAwMDAwMjUxMiA2NTUzNSBmDQowMDAwMDAyNTEzIDY1NTM1IGYNCjAwMDAw MDI1MTQgNjU1MzUgZg0KMDAwMDAwMjUxNSA2NTUzNSBmDQowMDAwMDAyNTE2IDY1NTM1IGYNCjAw MDAwMDI1MTcgNjU1MzUgZg0KMDAwMDAwMjUxOCA2NTUzNSBmDQowMDAwMDAyNTE5IDY1NTM1IGYN 
[remainder of base64-encoded PDF attachment data omitted]
--089e013d175a1cb6b104f5defe9c--

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 11:26:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D93C5988; Mon, 31 Mar 2014 11:26:17 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id BA6D1E89; Mon, 31 Mar 2014 11:26:17 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id DEF397277D; Mon, 31 Mar 2014 04:26:10 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port
10024) with ESMTP id 63294-09; Mon, 31 Mar 2014 04:26:10 -0700 (PDT) Received: from [192.168.3.176] (unknown [124.195.210.70]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id B8A3072767; Mon, 31 Mar 2014 04:26:04 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem From: Jordan Hubbard In-Reply-To: Date: Mon, 31 Mar 2014 16:25:57 +0500 Content-Transfer-Encoding: quoted-printable Message-Id: <5599C60E-7735-4596-B6C5-2EE428D9B248@mail.turbofuzz.com> References: <1377879526.2465097.1396046676367.JavaMail.root@uoguelph.ca> To: araujo@FreeBSD.org X-Mailer: Apple Mail (2.1874) Cc: FreeBSD Filesystems , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 31 Mar 2014 11:26:17 -0000

On Mar 31, 2014, at 8:53 AM, Marcelo Araujo wrote:

> I understand your concern about adding one more sysctl; however, maybe we can
> do something like ZFS does: if it detects the system is AMD and has more
> than X of RAM it enables some options by default, or a kind of warning can
> be displayed showing the new sysctl option.
>
> Of course, other people's opinions will be very welcome.

Why not simply enable (conditionally compile) it in only for the x64 architecture? If you're on a 64 bit Intel architecture machine, chances are pretty good you're also running hardware of reasonably recent vintage and aren't significantly HW constrained.

I think it's also fair to say that if you're providing NFS or iSCSI services on an i386 with 512M of memory or a similarly endowed ARM or PPC system, performance is not your first and primary concern. You're simply happy that it works at all.
;-) - Jordan From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 11:56:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E381A150 for ; Mon, 31 Mar 2014 11:56:20 +0000 (UTC) Received: from mail-yk0-x22a.google.com (mail-yk0-x22a.google.com [IPv6:2607:f8b0:4002:c07::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A9F39191 for ; Mon, 31 Mar 2014 11:56:20 +0000 (UTC) Received: by mail-yk0-f170.google.com with SMTP id 9so5976070ykp.15 for ; Mon, 31 Mar 2014 04:56:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=uSlNV385tcPieML4eJfZawJ30DrDvYJPDaET2DTn5pA=; b=VXMyHmwEpG1iwgLhube6di1kFXOWGJb04+2hhGDV2Rwx9puvKpyBCOtKhJpbCtjDz5 xtSZBVWgdbrW5vUK/3/QqVTyOOEWHoSpsTXyr+uF+I1HhAbR89ddNaR4CbMLV4rNdZV3 UOU8+R4ItlwjzcM9wa5UzIfAnir7xWVdIkQAXBrPirEKlqBuQwXwHUe1gdSkHGpnpNQZ Xv8tdt4tGDYzM+DCpeOFrxgVcB6GdPGwejuWm99dVsaQ9T6vO98/1kIeVjiAailorNjf nCrdvQjgOUmD9tfOguHO8cb3kNC4zQH6dUe+MBAlm7bVw00a/sijFx4P8c77JZ6iWd0I SY7A== MIME-Version: 1.0 X-Received: by 10.236.137.8 with SMTP id x8mr35316156yhi.4.1396266979848; Mon, 31 Mar 2014 04:56:19 -0700 (PDT) Received: by 10.170.95.212 with HTTP; Mon, 31 Mar 2014 04:56:19 -0700 (PDT) In-Reply-To: <53391F6C.9070208@FreeBSD.org> References: <53391F6C.9070208@FreeBSD.org> Date: Mon, 31 Mar 2014 13:56:19 +0200 Message-ID: Subject: Re: ZFS panic: spin lock held too long From: Idwer Vollering To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 31 Mar 2014 11:56:20 -0000 2014-03-31 9:55 GMT+02:00 Andriy Gapon : > on 30/03/2014 21:43 Idwer Vollering said the following: >> Unread portion of the kernel message buffer: >> spin lock 0xffffffff814fa030 (smp rendezvous) held by >> 0xfffff8000fb42920 (tid 100428) too long > > Please note the tid and obtain a stack trace for that thread. > You can switch to the thread using 'tid' command in kgdb. Like this? 
==== vmcore.0 ====

(kgdb) tid 100428
[Switching to thread 266 (Thread 100428)]#0  0xffffffff80c7f478 in cpustop_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1432
1432            savectx(&stoppcbs[cpu]);
(kgdb) bt
#0  0xffffffff80c7f478 in cpustop_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1432
#1  0xffffffff80c7f43f in ipi_nmi_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1417
#2  0xffffffff80c8db52 in trap (frame=0xfffffe020b2fdf30) at /usr/src/sys/amd64/amd64/trap.c:211
#3  0xffffffff80c757d3 in nmi_calltrap () at /usr/src/sys/amd64/amd64/exception.S:505
#4  0xffffffff80c7ed42 in smp_tlb_shootdown (vector=<value optimized out>, pmap=<value optimized out>, addr1=<value optimized out>, addr2=18446741875183812608) at cpufunc.h:309
#5  0xffffffff80c8066c in pmap_invalidate_range (pmap=<value optimized out>, sva=<value optimized out>, eva=<value optimized out>) at /usr/src/sys/amd64/amd64/pmap.c:1441
#6  0xffffffff80c81d73 in pmap_remove (pmap=0xffffffff81520958, sva=18446741875183812608, eva=18446741875183812608) at /usr/src/sys/amd64/amd64/pmap.c:3698
#7  0xffffffff80b0fc53 in kmem_unback (object=0xffffffff814fecf8, addr=18446741875183804416, size=8192) at /usr/src/sys/vm/vm_kern.c:401
#8  0xffffffff80b0fd44 in kmem_free (vmem=0xffffffff8147a300, addr=18446741875183804416, size=8192) at /usr/src/sys/vm/vm_kern.c:421
#9  0xffffffff80b08d6c in uma_large_free (slab=0xfffff801a62e7640) at /usr/src/sys/vm/uma_core.c:1097
#10 0xffffffff80898d17 in free (addr=<value optimized out>, mtp=0xffffffff81397420) at /usr/src/sys/kern/kern_malloc.c:599
#11 0xffffffff808f06b6 in sbuf_delete (s=0xfffffe0215613718) at /usr/src/sys/kern/subr_sbuf.c:761
#12 0xffffffff808735cb in sysctl_kern_proc_filedesc (oidp=<value optimized out>, arg1=<value optimized out>, arg2=-8791816929280, req=<value optimized out>) at /usr/src/sys/kern/kern_descrip.c:3540
#13 0xffffffff808bae0f in sysctl_root (arg1=<value optimized out>, arg2=<value optimized out>) at /usr/src/sys/kern/kern_sysctl.c:1497
#14 0xffffffff808bb3c8 in userland_sysctl (td=<value optimized out>, name=0xfffffe02156138b0, namelen=<value optimized out>, old=<value optimized out>, oldlenp=<value optimized out>, inkernel=<value optimized out>, new=<value optimized out>, retval=<value optimized out>, flags=0) at /usr/src/sys/kern/kern_sysctl.c:1607
#15 0xffffffff808bb1b4 in sys___sysctl (td=0xfffff8000fb42920, uap=0xfffffe02156139c0) at /usr/src/sys/kern/kern_sysctl.c:1533
#16 0xffffffff80c8ef87 in amd64_syscall (td=0xfffff8000fb42920, traced=0) at subr_syscall.c:134
#17 0xffffffff80c7567b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:391
#18 0x000000080102b73a in ?? ()
Previous frame inner to this frame (corrupt stack?)
==== vmcore.1 ====

(kgdb) tid 100396
[Switching to thread 263 (Thread 100396)]#0  0xffffffff80c7f478 in cpustop_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1432
1432            savectx(&stoppcbs[cpu]);
(kgdb) bt
#0  0xffffffff80c7f478 in cpustop_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1432
#1  0xffffffff80c7f43f in ipi_nmi_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1417
#2  0xffffffff80c8db52 in trap (frame=0xfffffe020b305f30) at /usr/src/sys/amd64/amd64/trap.c:211
#3  0xffffffff80c757d3 in nmi_calltrap () at /usr/src/sys/amd64/amd64/exception.S:505
#4  0xffffffff80c7ed42 in smp_tlb_shootdown (vector=<value optimized out>, pmap=<value optimized out>, addr1=<value optimized out>, addr2=18446741874812944384) at cpufunc.h:309
#5  0xffffffff80c8066c in pmap_invalidate_range (pmap=<value optimized out>, sva=<value optimized out>, eva=<value optimized out>) at /usr/src/sys/amd64/amd64/pmap.c:1441
#6  0xffffffff80c81d73 in pmap_remove (pmap=0xffffffff81520958, sva=18446741874812944384, eva=18446741874812944384) at /usr/src/sys/amd64/amd64/pmap.c:3698
#7  0xffffffff80b0fc53 in kmem_unback (object=0xffffffff814fecf8, addr=18446741874812936192, size=8192) at /usr/src/sys/vm/vm_kern.c:401
#8  0xffffffff80b0fd44 in kmem_free (vmem=0xffffffff8147a300, addr=18446741874812936192, size=8192) at /usr/src/sys/vm/vm_kern.c:421
#9  0xffffffff80b08d6c in uma_large_free (slab=0xfffff800acffcf00) at /usr/src/sys/vm/uma_core.c:1097
#10 0xffffffff80898d17 in free (addr=<value optimized out>, mtp=0xffffffff81a2cc40) at /usr/src/sys/kern/kern_malloc.c:599
#11 0xffffffff81890bb9 in zfsdev_ioctl () from /boot/kernel/zfs.ko
#12 0xffffffff807ac16f in devfs_ioctl_f (fp=0xfffff800ac0f18a0, com=0, data=0x0, cred=<value optimized out>, td=0xfffffe00078c6000) at /usr/src/sys/fs/devfs/devfs_vnops.c:757
#13 0xffffffff808fdeee in kern_ioctl (td=0xfffff8000e525920, fd=<value optimized out>, com=786678) at file.h:319
#14 0xffffffff808fdc6f in sys_ioctl (td=0xfffff8000e525920, uap=0xfffffe02155e39c0) at /usr/src/sys/kern/sys_generic.c:702
#15 0xffffffff80c8ef87 in amd64_syscall (td=0xfffff8000e525920, traced=0) at subr_syscall.c:134
#16 0xffffffff80c7567b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:391
#17 0x00000008019e308a in ?? ()
Previous frame inner to this frame (corrupt stack?)
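For anyone wanting to reproduce the inspection above, this is roughly what the kgdb session looks like — a minimal sketch; the kernel and crash-dump paths are illustrative and will differ per system:

    % kgdb /boot/kernel/kernel /var/crash/vmcore.0
    (kgdb) tid 100428    <- switch to the tid named in the "spin lock held too long" message
    (kgdb) bt            <- stack trace for that thread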
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 31 23:09:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A5E699D2; Mon, 31 Mar 2014 23:09:32 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 338DB106; Mon, 31 Mar 2014 23:09:31 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ArsEAOP0OVODaFve/2dsb2JhbABZg0FXgwq/DIEegTV0giUBAQEDASMERwsbGAICDRkCIzYZG4dKAwkIDa52mwsNh0sXgSmIJ4MTgUQBIzQHgm+BSQSUYAeBeoMgiziFSoNMIYEsAR8i X-IronPort-AV: E=Sophos;i="4.97,768,1389762000"; d="scan'208";a="110685846" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 31 Mar 2014 19:09:24 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 94508B3F46; Mon, 31 Mar 2014 19:09:24 -0400 (EDT) Date: Mon, 31 Mar 2014 19:09:24 -0400 (EDT) From: Rick Macklem To: pyunyh@gmail.com Message-ID: <779330717.3747229.1396307364595.JavaMail.root@uoguelph.ca> In-Reply-To: <20140331023253.GC3548@michelle.cdnetworks.com> Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , FreeBSD Net , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 31 Mar 2014 23:09:32 -0000

Yonghyeon Pyun wrote:
> On Wed, Mar 26, 2014 at 08:27:48PM -0400, Rick Macklem wrote:
> > pyunyh@gmail.com wrote:
> > > On Tue, Mar 25, 2014 at 07:10:35PM -0400, Rick Macklem wrote:
> > > > Hi,
> > > >
> > > > First off, I hope you don't mind that I cross-posted this,
> > > > but I wanted to make sure both the NFS/iSCSI and networking
> > > > types see it.
> > > > If you look in this mailing list thread:
> > > > http://docs.FreeBSD.org/cgi/mid.cgi?1850411724.1687820.1395621539316.JavaMail.root
> > > > you'll see that several people have been working hard at testing,
> > > > and thanks to them, I think I now know what is going on.
> > >
> > > Thanks for your hard work on narrowing down that issue. I'm too
> > > busy with $work these days, so I couldn't find time to investigate
> > > the issue.
> > >
> > > > (This applies to network drivers that support TSO and are limited
> > > > to 32 transmit segments->32 mbufs in chain.) Doing a quick search
> > > > I found the following drivers that appear to be affected (I may
> > > > have missed some):
> > > > jme, fxp, age, sge, msk, alc, ale, ixgbe/ix, nfe, e1000/em, re
> > >
> > > The magic number 32 was chosen a long time ago when I implemented
> > > TSO in non-Intel drivers. I tried to find an optimal number to
> > > reduce kernel stack usage at that time. bus_dma(9) will coalesce
> > > with the previous segment if possible, so I thought the number 32
> > > was not an issue. I'm not sure whether current bus_dma(9) still has
> > > the same code, though.
> > > The number 32 is an arbitrary one, so you can increase
> > > it if you want.
> >
> > Well, in the case of "ix" Jack Vogel says it is a hardware limitation.
> > I can't change drivers that I can't test and don't know anything about
> > the hardware. Maybe replacing m_collapse() with m_defrag() is an
> > exception, since I know what that is doing and it isn't hardware
> > related, but I would still prefer a review by the driver
> > author/maintainer before making such a change.
> >
> > If there are drivers that you know can be increased from 32->35,
> > please do so, since that will not only avoid the EFBIG failures but
> > also avoid a lot of calls to m_defrag().
> >
> > > > Further, of these drivers, the following use m_collapse() and not
> > > > m_defrag() to try and reduce the # of mbufs in the chain.
> > > > m_collapse() is not going to get the 35 mbufs down to 32 mbufs,
> > > > as far as I can see, so these ones are more badly broken:
> > > > jme, fxp, age, sge, alc, ale, nfe, re
> > >
> > > I guess m_defrag(9) is more optimized for non-TSO packets. You don't
> > > want to waste CPU cycles to copy the full frame to reduce the
> > > number of mbufs in the chain. For TSO packets, m_defrag(9) looks
> > > better, but if we always have to copy a full TSO packet to make TSO
> > > work, driver writers will have to invent a better scheme rather than
> > > blindly relying on m_defrag(9), I guess.
> >
> > Yes, avoiding m_defrag() calls would be nice. For this issue,
> > increasing the transmit segment limit from 32->35 does that, if the
> > change can be done easily/safely.
> >
> > Otherwise, all I can think of is my suggestion to add something like
> > if_hw_tsomaxseg, which the driver can use to tell tcp_output() the
> > driver's limit for the # of mbufs in the chain.
> >
> > > > The long description is in the above thread, but the short
> > > > version is:
> > > > - NFS generates a chain with 35 mbufs in it (for read/readdir
> > > >   replies and write requests), made up of (tcpip header, RPC
> > > >   header, NFS args, 32 clusters of file data)
> > > > - tcp_output() usually trims the data size down to tp->t_tsomax
> > > >   (65535) and then some more to make it an exact multiple of the
> > > >   TCP transmit data size.
> > > > - the net driver prepends an ethernet header, growing the length
> > > >   by 14 (or sometimes 18 for vlans), but in the first mbuf and not
> > > >   adding one to the chain.
> > > > - m_defrag() copies this to a chain of 32 mbuf clusters (because
> > > >   the total data length is <= 64K) and it gets sent.
> > > >
> > > > However, if the data length is a little less than 64K when passed
> > > > to tcp_output(), so that the length including headers is in the
> > > > range 65519->65535...
> > > > - tcp_output() doesn't reduce its size.
> > > > - the net driver adds an ethernet header, making the total data
> > > >   length slightly greater than 64K.
> > > > - m_defrag() copies it to a chain of 33 mbuf clusters, which
> > > >   fails with EFBIG
> > > > --> trainwrecks NFS performance, because the TSO segment is
> > > >   dropped instead of sent.
> > > >
> > > > A tester also stated that the problem could be reproduced using
> > > > iSCSI. Maybe Edward Napierala might know some details w.r.t. what
> > > > kind of mbuf chain iSCSI generates?
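To make the failure arithmetic above concrete, here is the boundary case as a small worked example — a sketch, using the stock values on the systems discussed here (MCLBYTES is 2048, an untagged ethernet header is 14 bytes):

    #include <stdio.h>

    #define MCLBYTES        2048    /* standard mbuf cluster size */
    #define ETHER_HDR_LEN   14      /* untagged ethernet header */

    int
    main(void)
    {
            int tso = 65535;                        /* tp->t_tsomax == IP_MAXPACKET */
            int frame = tso + ETHER_HDR_LEN;        /* 65549 once the driver prepends the header */
            int clusters = (frame + MCLBYTES - 1) / MCLBYTES;

            /* 65549 bytes need 33 clusters, one more than the 32-segment limit -> EFBIG */
            printf("%d bytes -> %d clusters (limit 32)\n", frame, clusters);
            return (0);
    }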
> > > > Also, one tester has reported that setting if_hw_tsomax in the
> > > > driver before the ether_ifattach() call didn't make the value of
> > > > tp->t_tsomax smaller. However, reducing IP_MAXPACKET (which is
> > > > what it is set to by default) did reduce it. I have no idea why
> > > > this happens or how to fix it, but it implies that setting
> > > > if_hw_tsomax in the driver isn't a solution until this is
> > > > resolved.
> > > >
> > > > So, what to do about this?
> > > > First, I'd like a simple fix/workaround that can go into 9.3
> > > > (which is code freeze in May). The best thing I can think of is
> > > > setting if_hw_tsomax to a smaller default value. (Line# 658 of
> > > > sys/net/if.c in head.)
> > > >
> > > > Version A:
> > > > replace
> > > >     ifp->if_hw_tsomax = IP_MAXPACKET;
> > > > with
> > > >     ifp->if_hw_tsomax = min(32 * MCLBYTES - (ETHER_HDR_LEN +
> > > >         ETHER_VLAN_ENCAP_LEN), IP_MAXPACKET);
> > > > plus replace m_collapse() with m_defrag() in the drivers listed
> > > > above.
> > > >
> > > > This would only reduce the default from 65535->65518, so it only
> > > > impacts the uncommon case where the output size (with tcpip
> > > > header) is within this range. (As such, I don't think it would
> > > > have a negative impact for drivers that handle more than 32
> > > > transmit segments.) From the testers, it seems that this is
> > > > sufficient to get rid of the EFBIG errors. (The total data length
> > > > including the ethernet header doesn't exceed 64K, so m_defrag()
> > > > fits it into 32 mbuf clusters.)
> > > >
> > > > The main downside of this is that there will be a lot of
> > > > m_defrag() calls being done and they do quite a bit of
> > > > bcopy()'ng.
> > > >
> > > > Version B:
> > > > replace
> > > >     ifp->if_hw_tsomax = IP_MAXPACKET;
> > > > with
> > > >     ifp->if_hw_tsomax = min(29 * MCLBYTES, IP_MAXPACKET);
> > > >
> > > > This one would avoid the m_defrag() calls, but might have a
> > > > negative impact on TSO performance for drivers that can handle 35
> > > > transmit segments, since the maximum TSO segment size is reduced
> > > > by about 6K. (Because of the second size reduction to an exact
> > > > multiple of the TCP transmit data size, the exact amount varies.)
> > > >
> > > > Possible longer term fixes:
> > > > One longer term fix might be to add something like
> > > > if_hw_tsomaxseg so that a driver can set a limit on the number of
> > > > transmit segments (mbufs in chain) and tcp_output() could use
> > > > that to limit the size of the TSO segment, as required. (I have a
> > > > first stab at such a patch, but no way to test it, so I can't see
> > > > that being done by May. Also, it would require changes to a lot
> > > > of drivers to make it work. I've attached this patch, in case
> > > > anyone wants to work on it?)
> > > >
> > > > Another might be to increase the size of MCLBYTES (I don't see
> > > > this as practical for 9.3, although the actual change is simple).
> > > > I do think that increasing MCLBYTES might be something to
> > > > consider doing in the future, for reasons beyond fixing this.
> > > >
> > > > So, what do others think should be done?
> > > >
> > > > rick
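As a quick sanity check of the numbers quoted above, the two proposed defaults work out as follows — a sketch using the stock constants (MCLBYTES = 2048, ETHER_HDR_LEN = 14, ETHER_VLAN_ENCAP_LEN = 4):

    #include <stdio.h>

    #define MCLBYTES                2048
    #define ETHER_HDR_LEN           14
    #define ETHER_VLAN_ENCAP_LEN    4
    #define IP_MAXPACKET            65535
    #define MIN(a, b)               ((a) < (b) ? (a) : (b))

    int
    main(void)
    {
            /* Version A: 32 * 2048 - 18 = 65518, so the ethernet header still fits in 32 clusters */
            printf("A: %d\n", MIN(32 * MCLBYTES - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN),
                IP_MAXPACKET));
            /* Version B: 29 * 2048 = 59392, roughly 6K below IP_MAXPACKET */
            printf("B: %d\n", MIN(29 * MCLBYTES, IP_MAXPACKET));
            return (0);
    }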
> > > AFAIK all TSO capable drivers you mentioned above have no limit on
> > > the number of TX segments in the TSO path. Not sure about Intel
> > > controllers though. Increasing the number of segments will consume
> > > lots of kernel stack in those drivers. Given that ixgbe, which
> > > seems to use 100, didn't show any kernel stack shortage, I think
> > > bumping the number of segments would be a quick way to address the
> > > issue.
> >
> > Well, bumping it from 32->35 is all it would take for NFS (can't
> > comment w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32 for
> > the 82599 (just so others aren't confused by the above comment). I
> > understand your point was w.r.t. using 100 without blowing the kernel
> > stack, but since the testers have been using "ix" with the 82599
> > chip, which is limited to 32 transmit segments...
> >
> > However, please increase any you know can be safely done from
> > 32->35, rick
>
> Done in r263957.
>
Thanks, rick
ps: I've pinged the guys who seem to be maintaining bge, bce, bxe, since they all have the problem, too.

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 00:41:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F39D2EF4; Tue, 1 Apr 2014 00:41:44 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 7FC1BA82; Tue, 1 Apr 2014 00:41:44 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: ArkEAPYJOlODaFve/2dsb2JhbABZhBiDCr51gR6BN3SCJQEBAQMBI1YFFg4EBgICDRkCIygOBhOHZQMJCK58mwoNh0sXgSmLOoFoATMHgm+BSQSWYY5YhUqDTCGBbg X-IronPort-AV: E=Sophos;i="4.97,769,1389762000"; d="scan'208";a="110701900" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 31 Mar 2014 20:41:43 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 0BA6DB3F17; Mon, 31 Mar 2014 20:41:43 -0400 (EDT) Date: Mon, 31 Mar 2014 20:41:43 -0400 (EDT) From: Rick Macklem To: Jordan Hubbard Message-ID: <1519461744.3785300.1396312903037.JavaMail.root@uoguelph.ca> In-Reply-To: <5599C60E-7735-4596-B6C5-2EE428D9B248@mail.turbofuzz.com> Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Alexander Motin , Garrett Wollman X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 00:41:45 -0000

Jordan Hubbard wrote:
>
> On Mar 31, 2014, at 8:53 AM, Marcelo Araujo wrote:
>
> > I understand your concern about adding one more sysctl; however,
> > maybe we can do something like ZFS does: if it detects the system is
> > AMD and has more than X of RAM it enables some options by default,
> > or a kind of warning can be displayed showing the new sysctl option.
> >
> > Of course, other people's opinions will be very welcome.
>
> Why not simply enable (conditionally compile) it in only for the x64
> architecture? If you're on a 64 bit Intel architecture machine,
> chances are pretty good you're also running hardware of reasonably
> recent vintage and aren't significantly HW constrained.

I'm actually typing this on a single core amd64 with 2Gbytes of RAM, so I think enabling it only for both 64bits and at least some # of Gbytes of RAM would be better. (I agree that most amd64s will be relatively big machines, but not all;-)

My biggest problem is that I have no way of testing this on a fairly big amd64 server at this time, and I'd be a lot more comfortable committing a patch that has been tested this way. (I realize that Marcelo has been running it for his benchmarks and that's a good start, but it isn't the same as a heavily loaded server.)

I notice that Alexander is on the cc list and I've added Garrett, since those are the two guys that have been doing a bunch of server testing (and my thanks go to them for this). Maybe they will have a chance to test this patch on a heavily loaded server?

Since I do want to test/debug the if_hw_tsomaxseg patch I have, I plan on inquiring to see if I can use something like the netperf cluster for this testing (in a couple of weeks when I get home).

rick

> I think it's also fair to say that if you're providing NFS or iSCSI
> services on an i386 with 512M of memory or a similarly endowed ARM
> or PPC system, performance is not your first and primary concern.
> You're simply happy that it works at all. ;-)
>
> - Jordan

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 01:12:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A199A6F6; Tue, 1 Apr 2014 01:12:56 +0000 (UTC) Received: from mail-pa0-x234.google.com (mail-pa0-x234.google.com [IPv6:2607:f8b0:400e:c03::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 641BDD30; Tue, 1 Apr 2014 01:12:56 +0000 (UTC) Received: by mail-pa0-f52.google.com with SMTP id rd3so9078488pab.11 for ; Mon, 31 Mar 2014 18:12:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=references:mime-version:in-reply-to:content-type :content-transfer-encoding:message-id:cc:from:subject:date:to; bh=oRcKdiNIY1yfSfuPbbniah76g06EmYXzm4x1ukbK/eg=; b=hmbenASFw8ap5g8kB95gtA0qrHqw4RAbH2aUfGk7l1xyAU4HBVUrgQHFsR9RutYyxP 1Zmtrtb6X1fATeConP8OdP3432dxwqtjLtiIfV3poc2c0TliEdJiWyeh9paIoT6tNaBG VzSVe7DTCMr2PHx+PXrSdOi5uKvr0PPjhNCtu9t/a1L3rIUTqQ63D23IJ7sCAkjbFh5b nI6AVY83SSKOhX+bM/vI9IjW2RUq8EITqX/LxyleZUwWbqaMZc6AuJQtS32lJ1mnWYDD /QZINyOtuxDI2iEXgnY/LWxKGBSBJCJLHPX66Ip12/W6UIzGwBLO5ZmeUhgNlvTFmuAm h30w== X-Received: by 10.68.196.202 with SMTP id io10mr246365pbc.149.1396314776060; Mon, 31 Mar 2014 18:12:56 -0700 (PDT) Received: from [10.182.119.231] ([106.65.75.47]) by mx.google.com with ESMTPSA id ha2sm46127566pbb.8.2014.03.31.18.12.46 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Mon, 31 Mar 2014 18:12:54 -0700 (PDT) References: <1519461744.3785300.1396312903037.JavaMail.root@uoguelph.ca> Mime-Version: 1.0 (1.0) In-Reply-To: <1519461744.3785300.1396312903037.JavaMail.root@uoguelph.ca>
Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Message-Id: <2A998A50-C692-4EAC-A01C-79D5230566E6@gmail.com> X-Mailer: iPhone Mail (10B146) From: araujobsdport@gmail.com Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem Date: Tue, 1 Apr 2014 09:12:34 +0800 To: Rick Macklem Cc: FreeBSD Filesystems , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 01:12:56 -0000

Hello Rick,

I have some production servers with lots of NFS users that I can test too, but it will take time and we can only check for regressions, as I can't run any benchmarks there!

Let me check how loaded this server is and I'll tell you later; if you want me to do more tests, I can do so.

Best Regards.

On 2014/4/1, at 8:41, Rick Macklem wrote:

> Jordan Hubbard wrote:
>>
>> On Mar 31, 2014, at 8:53 AM, Marcelo Araujo wrote:
>>
>>> I understand your concern about adding one more sysctl; however, maybe
>>> we can do something like ZFS does: if it detects the system is AMD and
>>> has more than X of RAM it enables some options by default, or a kind
>>> of warning can be displayed showing the new sysctl option.
>>>
>>> Of course, other people's opinions will be very welcome.
>>
>> Why not simply enable (conditionally compile) it in only for the x64
>> architecture? If you're on a 64 bit Intel architecture machine,
>> chances are pretty good you're also running hardware of reasonably
>> recent vintage and aren't significantly HW constrained.
> I'm actually typing this on a single core amd64 with 2Gbytes of RAM, so
> I think enabling it only for both 64bits and at least some # of Gbytes of
> RAM would be better. (I agree that most amd64s will be relatively big
> machines, but not all;-)
>
> My biggest problem is that I have no way of testing this on a fairly
> big amd64 server at this time and I'd be a lot more comfortable committing
> a patch that has been tested this way. (I realize that Marcelo has been
> running it for his benchmarks and that's a good start, but it isn't the
> same as a heavily loaded server.)
>
> I notice that Alexander is on the cc list and I've added Garrett, since
> those are the two guys that have been doing a bunch of server testing
> (and my thanks go to them for this). Maybe they will have a chance to
> test this patch on a heavily loaded server?
>
> Since I do want to test/debug the if_hw_tsomaxseg patch I have, I plan
> on inquiring to see if I can use something like the netperf cluster
> for this testing (in a couple of weeks when I get home).
>
> rick
>
>> I think it's also fair to say that if you're providing NFS or iSCSI
>> services on an i386 with 512M of memory or a similarly endowed ARM
>> or PPC system, performance is not your first and primary concern.
>> You're simply happy that it works at all. ;-)
>>
>> - Jordan

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 01:43:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6186AF95; Tue, 1 Apr 2014 01:43:27 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id F1014F53; Tue, 1 Apr 2014 01:43:26 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ar0EACkZOlODaFve/2dsb2JhbABZg0FXgwq4XoYXTVGBNHSCJQEBAQMBAQEBICsgCxsYAgINGQIpAQkmDgcEARwEh1AIDa5iomIXgSmIJ4ReAQEbNAeCb4FJBJV3hAqRAoNMITGBBDk X-IronPort-AV: E=Sophos;i="4.97,769,1389762000"; d="scan'208";a="110716106" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 31 Mar 2014 21:43:25 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 55C1AB403C; Mon, 31 Mar 2014 21:43:25 -0400 (EDT) Date: Mon, 31 Mar 2014 21:43:25 -0400 (EDT) From: Rick Macklem To: araujo@FreeBSD.org Message-ID: <2056019527.3811582.1396316605342.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 01:43:27 -0000

Marcelo Araujo wrote:
> Hello Rick,
>
> We have made a couple more benchmarks here with additional options,
> such as '64 threads' and readahead=8.
>
I can't remember, but if you haven't already done so, another thing to try are these sysctls on the server:
sysctl vfs.nfsd.tcphighwater=100000
sysctl vfs.nfsd.tcpcachetimeo=300

These should reduce the server's CPU overhead (how important these settings are depends on how current your kernel is).

> Now, we add nfsstat and netstat -m into the table.
> Here attached is the full benchmark, and I can say, this patch really
> improved the read speed.
>
I noticed a significant reduction in CPU usage on the server (about 20%). An interesting question would be "Is this CPU reduction a result of avoiding the m_defrag() calls in the ix driver?".

Unfortunately, the only way I can think of answering this is doing the benchmarks on hardware without the 32 mbuf chain limitation, but I doubt that you can do that?

Put another way, it would be interesting to compare "with vs without" the patch on machines where the network interface can handle 35 mbufs in the transmit chain, so there aren't m_defrag() calls being done for the non-patched case.

Anyhow, have fun with it, rick

> I understand your concern about adding one more sysctl; however, maybe we
> can do something like ZFS does: if it detects the system is AMD and
> has more than X of RAM it enables some options by default, or a
> kind of warning can be displayed showing the new sysctl option.
> > > Best Regards, > > > > 2014-03-29 6:44 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca > : > > > > > Marcelo Araujo wrote: > > 2014-03-28 5:37 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca >: > > > > > Christopher Forgeron wrote: > > > > I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE > > > > or > > > > earlier, > > > > as a 9.2-STABLE from last year I have doesn't exhibit the > > > > problem. > > > > New > > > > code in if.c at line 660 looks to be what is starting this, > > > > which > > > > makes me > > > > wonder how TSO was being handled before 9.2. > > > > > > > > I also like Rick's NFS patch for cluster size. I notice an > > > > improvement, but > > > > don't have solid numbers yet. I'm still stress testing it as we > > > > speak. > > > > > > > Unfortunately, this causes problems for small i386 systems, so I > > > am reluctant to commit it to head. Maybe a variant that is only > > > enabled for amd64 systems with lots of memory would be ok? > > > > > > > > Rick, > > > > Maybe you can create a SYSCTL to enable/disable it by the end user > > will be > > more reasonable. Also, of course, it is so far safe if only 64Bits > > CPU can > > enable this SYSCTL. Any other option seems not OK, will be hard to > > judge > > what is lots of memory and what is not, it will depends what is > > running > > onto the system. > > > I guess adding it so it can be optionally enabled via a sysctl isn't > a bad idea. I think the largest risk here is "how do you tell people > what the risk of enabling this is"? > > There are already a bunch of sysctls related to NFS that few people > know how to use. (I recall that Alexander has argued that folk don't > want > worry about these tunables and I tend to agree.) > > If I do a variant of the patch that uses m_getjcl(..M_WAITOK..), then > at least the "breakage" is thread(s) sleeping on "btallo", which is > fairly easy to check for, althouggh rather obscure. > (Btw, I've never reproduced this for a patch that changes the code to > always use MJUMPAGESIZE mbuf clusters. > I can only reproduce it intermittently when the patch mixes > allocation of > MCLBYTES clusters and MJUMPAGESIZE clusters.) > > I've been poking at it to try and figure out how to get > m_getjcl(..M_NOWAIT..) > to return NULL instead of looping when it runs out of boundary tags > (to > see if that can result in a stable implementation of the patch), but > haven't had much luck yet. > > Bottom line: > I just don't like committing a patch that can break the system in > such an > obscure way, even if it is enabled via a sysctl. > > Others have an opinion on this? > > Thanks, rick > > > > The SYSCTL will be great, and in case you don't have time to do it, > > I > > can > > give you a hand. > > > > I'm gonna do more benchmarks today and will send another report, > > but > > in our > > product here, I'm inclined to use this patch, because 10~20% speed > > up > > in > > read for me is a lot. 
:-) > > > > Thank you so much and best regards, > > -- > > Marcelo Araujo > > araujo@FreeBSD.org > > > > _______________________________________________ > > freebsd-net@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-net > > To unsubscribe, send any mail to > > " freebsd-net-unsubscribe@freebsd.org " > > > > > > > -- > Marcelo Araujo > araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 01:53:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20EFE305; Tue, 1 Apr 2014 01:53:52 +0000 (UTC) Received: from mail-wg0-x22f.google.com (mail-wg0-x22f.google.com [IPv6:2a00:1450:400c:c00::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 7D3AF159; Tue, 1 Apr 2014 01:53:51 +0000 (UTC) Received: by mail-wg0-f47.google.com with SMTP id x12so6588872wgg.18 for ; Mon, 31 Mar 2014 18:53:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:reply-to:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=XxdVTjfGHaIxV71Kd4ticJ+z9xDXWhv0EeCfZM0UiyE=; b=HiGeEyqL5Bdkl8wtcVmxVHPeK4d1QGCENUOoGyQfDVaFyV//ZEtOi01BNmNvdLIl17 nH+pc899V1wavGsB1qCCKg+PAPud2P//gM+LmTwnIG/tYM/X1qULD5eLEwGBbD0x7W1z GLYsvR85g7F76JupLnnHo+9kDD80jKXGz3ocBRKHnGwY5Miil9Vk+zPp3P58gue41x32 nLHcj/J0y4LsjAzKz2eQlzpsmyZd0Z4xJZIYr35xSoDToxvJeEXZM3XjM62nGMpJ/Gr3 /oSp+sIFIGj7z18m50fhj8jUykuKAgHCmMFmys3yfRg/6XaCyeN/cRXx+fgCYOApStiB rZ6w== MIME-Version: 1.0 X-Received: by 10.180.94.196 with SMTP id de4mr16371875wib.16.1396317229858; Mon, 31 Mar 2014 18:53:49 -0700 (PDT) Received: by 10.216.190.199 with HTTP; Mon, 31 Mar 2014 18:53:49 -0700 (PDT) In-Reply-To: <2056019527.3811582.1396316605342.JavaMail.root@uoguelph.ca> References: <2056019527.3811582.1396316605342.JavaMail.root@uoguelph.ca> Date: Tue, 1 Apr 2014 09:53:49 +0800 Message-ID: Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem From: Marcelo Araujo To: Rick Macklem Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: araujo@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 01:53:52 -0000 2014-04-01 9:43 GMT+08:00 Rick Macklem : > Marcelo Araujo wrote: > > > > Hello Rick, > > > > > > We have made couple more benchmarks here with additional options, > > such like '64 threads' and readahead=8. > > > I can't remember, but if you haven't already done so, another thing to > try are these sysctls on the server: > sysctl vfs.nfsd.tcphighwater=100000 > sysctl vfs.nfsd.tcpcachetimeo=300 > > These should reduce the server's CPU overhead (how important these > setting are depends on how current your kernel is). > I haven't done it, I don't have these sysctl on my system. > > > > > Now, we add nfsstat and netstat -m into the table. > > Here attached is the full benchmark, and I can say, this patch really > > improved the read speed. > > > I noticed a significant reduction in CPU usage on the server (about 20%). 
> An interesting question would be "Is this CPU reduction a result of
> avoiding the m_defrag() calls in the ix driver?".
>
I do believe it is because it avoids m_defrag(), but I didn't try to dig into it to check whether it really is m_defrag().

> Unfortunately, the only way I can think of answering this is doing the
> benchmarks on hardware without the 32 mbuf chain limitation, but I
> doubt that you can do that?
>
No, I don't have any hardware without the 32 mbuf limitation.

> Put another way, it would be interesting to compare "with vs without"
> the patch on machines where the network interface can handle 35 mbufs
> in the transmit chain, so there aren't m_defrag() calls being done for
> the non-patched case.
>
> Anyhow, have fun with it, rick
>
Maybe Christopher can do this benchmark as well in his environment.

> > I understand your concern about adding one more sysctl; however,
> > maybe we can do something like ZFS does: if it detects the system is
> > AMD and has more than X of RAM it enables some options by default,
> > or a kind of warning can be displayed showing the new sysctl option.
> >
> > Of course, other people's opinions will be very welcome.
> >
> > Best Regards,
> >
> > 2014-03-29 6:44 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca > :
> >
> > > Marcelo Araujo wrote:
> > > 2014-03-28 5:37 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca >:
> > >
> > > > Christopher Forgeron wrote:
> > > > > I'm quite sure the problem is on 9.2-RELEASE, not 9.1-RELEASE
> > > > > or earlier, as a 9.2-STABLE from last year I have doesn't
> > > > > exhibit the problem. New code in if.c at line 660 looks to be
> > > > > what is starting this, which makes me wonder how TSO was being
> > > > > handled before 9.2.
> > > > >
> > > > > I also like Rick's NFS patch for cluster size. I notice an
> > > > > improvement, but don't have solid numbers yet. I'm still
> > > > > stress testing it as we speak.
> > > >
> > > > Unfortunately, this causes problems for small i386 systems, so I
> > > > am reluctant to commit it to head. Maybe a variant that is only
> > > > enabled for amd64 systems with lots of memory would be ok?
> > >
> > > Rick,
> > >
> > > Maybe you can create a SYSCTL to enable/disable it by the end
> > > user; that would be more reasonable. Also, of course, it is so far
> > > safe if only 64Bits CPU can enable this SYSCTL. Any other option
> > > seems not OK; it will be hard to judge what is lots of memory and
> > > what is not, as it will depend on what is running on the system.
> >
> > I guess adding it so it can be optionally enabled via a sysctl isn't
> > a bad idea. I think the largest risk here is "how do you tell people
> > what the risk of enabling this is"?
> >
> > There are already a bunch of sysctls related to NFS that few people
> > know how to use. (I recall that Alexander has argued that folk don't
> > want to worry about these tunables and I tend to agree.)
> >
> > If I do a variant of the patch that uses m_getjcl(..M_WAITOK..), then
> > at least the "breakage" is thread(s) sleeping on "btallo", which is
> > fairly easy to check for, although rather obscure.
> > (Btw, I've never reproduced this for a patch that changes the code to
> > always use MJUMPAGESIZE mbuf clusters.
> > I can only reproduce it intermittently when the patch mixes
> > allocation of MCLBYTES clusters and MJUMPAGESIZE clusters.)
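For readers unfamiliar with m_getjcl(9): a minimal sketch (not taken from the patch under discussion) of an allocation that mixes the two cluster pools mentioned above. MJUMPAGESIZE is the 4K page-size cluster zone, and the M_NOWAIT/M_WAITOK choice is the crux, since M_WAITOK is what can end up sleeping on "btallo":

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /*
     * Allocate one packet-header mbuf with a 4K cluster, falling back
     * to a standard 2K cluster if the jumbo zone can't supply one.
     */
    static struct mbuf *
    alloc_4k_cluster(void)
    {
            struct mbuf *m;

            m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUMPAGESIZE);
            if (m == NULL)
                    m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
            return (m);
    }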
> > > > I've been poking at it to try and figure out how to get > > m_getjcl(..M_NOWAIT..) > > to return NULL instead of looping when it runs out of boundary tags > > (to > > see if that can result in a stable implementation of the patch), but > > haven't had much luck yet. > > > > Bottom line: > > I just don't like committing a patch that can break the system in > > such an > > obscure way, even if it is enabled via a sysctl. > > > > Others have an opinion on this? > > > > Thanks, rick > > > > > > > The SYSCTL will be great, and in case you don't have time to do it, > > > I > > > can > > > give you a hand. > > > > > > I'm gonna do more benchmarks today and will send another report, > > > but > > > in our > > > product here, I'm inclined to use this patch, because 10~20% speed > > > up > > > in > > > read for me is a lot. :-) > > > > > > Thank you so much and best regards, > > > -- > > > Marcelo Araujo > > > araujo@FreeBSD.org > > > > > > > _______________________________________________ > > > freebsd-net@freebsd.org mailing list > > > http://lists.freebsd.org/mailman/listinfo/freebsd-net > > > To unsubscribe, send any mail to > > > " freebsd-net-unsubscribe@freebsd.org " > > > > > > > > > > > > > -- > > Marcelo Araujo > > araujo@FreeBSD.org > -- Marcelo Araujo araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 02:17:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A404874E; Tue, 1 Apr 2014 02:17:22 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 31783380; Tue, 1 Apr 2014 02:17:21 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqgEAHEgOlODaFve/2dsb2JhbABZg0FXgwq4XoYXTVGBNHSCJQEBAQMBAQEBICsgCxsYAgINGQIpAQkmDgcEARwEh1AIDa5iokkXgSmIHYRZAQEbNAeCb4FJBJV3hAqRA4NMITGBBDk X-IronPort-AV: E=Sophos;i="4.97,769,1389762000"; d="scan'208";a="110578951" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 31 Mar 2014 22:17:14 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 6C577B3F17; Mon, 31 Mar 2014 22:17:14 -0400 (EDT) Date: Mon, 31 Mar 2014 22:17:14 -0400 (EDT) From: Rick Macklem To: araujo@FreeBSD.org Message-ID: <1331037009.3823520.1396318634433.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: RFC: How to fix the NFS/iSCSI vs TSO problem MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 02:17:22 -0000 Marcelo Araujo wrote: > 2014-04-01 9:43 GMT+08:00 Rick Macklem : > > > Marcelo Araujo wrote: > > > > > > Hello Rick, > > > > > > > > > We have made couple more benchmarks here with additional options, > > > such like '64 threads' and readahead=8. 
> > > > > I can't remember, but if you haven't already done so, another thing > > to > > try are these sysctls on the server: > > sysctl vfs.nfsd.tcphighwater=100000 > > sysctl vfs.nfsd.tcpcachetimeo=300 > > > > These should reduce the server's CPU overhead (how important these > > setting are depends on how current your kernel is). > > > > I haven't done it, I don't have these sysctl on my system. > Ok, yes, I now notice you are using a 9.1 system. > > > > > > > > > Now, we add nfsstat and netstat -m into the table. > > > Here attached is the full benchmark, and I can say, this patch > > > really > > > improved the read speed. > > > > > I noticed a significant reduction in CPU usage on the server (about > > 20%). > > An interesting question would be "Is this CPU reduction a result of > > avoiding the m_defrag() calls in the ix driver?". > > > > I do believe it is because avoid m_defrag(), but I didn't try to dig > into > it to check if is really m_defrag(). > For FreeBSD9.1, you could make this small change to sys/netinet/tcp_output.c so that it will avoid the m_defrag() calls in the "ix" driver. (line #750, 751) Replace: if (len > IP_MAXPACKET - hdrlen) { len = IP_MAXPACKET - hdrlen; with if (len > 29 * MCLBYTES - hdrlen) { len = 29 * MCLBYTES - hdrlen; I think this will keep the TSO segments at 32 mbufs in the chain. It would be interesting to see if a system with this patch still demonstrated the 20% reduction in CPU when the 4kmcl.patch is applied to it. > > > Unfortunately, the only way I can think of answering this is doing > > the > > benchmarks on hardware without the 32 mbuf chain limitation, but I > > doubt that you can do that? > > > > No, I don't have any hardware without the 32mbuf limitation. > Your previous post mentioned this network interface: NIC - 10G Intel X540 that is based on 82599 chipset. This one has the 32 mbuf limitation. (See above w.r.t. testing without m_defrag() calls.) Again, have fun with it, rick > > > > > Put another way, it would be interesting to compare "with vs > > without" > > the patch on machines where the network interface can handle 35 > > mbufs > > in the transmit chain, so there aren't m_defrag() calls being done > > for > > the non-patched case. > > > > Anyhow, have fun with it, rick > > > > Maybe Christopher can do this benchmark as well in his environment. > > > > > > > > > > I understand your concern about add more one sysctl, however > > > maybe we > > > can do something like ZFS does, if it detect the system is AMD > > > and > > > have more than X of RAM it enables some options by default, or a > > > kind of warning can be displayed show the new sysctl option. > > > > > > > > > Of, course other people opinion will be very welcome. > > > > > > > > > Best Regards, > > > > > > > > > > > > 2014-03-29 6:44 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca > : > > > > > > > > > > > > > > > Marcelo Araujo wrote: > > > > 2014-03-28 5:37 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca > > > > >: > > > > > > > > > Christopher Forgeron wrote: > > > > > > I'm quite sure the problem is on 9.2-RELEASE, not > > > > > > 9.1-RELEASE > > > > > > or > > > > > > earlier, > > > > > > as a 9.2-STABLE from last year I have doesn't exhibit the > > > > > > problem. > > > > > > New > > > > > > code in if.c at line 660 looks to be what is starting this, > > > > > > which > > > > > > makes me > > > > > > wonder how TSO was being handled before 9.2. > > > > > > > > > > > > I also like Rick's NFS patch for cluster size. 
I notice an > > > > > > improvement, but > > > > > > don't have solid numbers yet. I'm still stress testing it > > > > > > as we > > > > > > speak. > > > > > > > > > > > Unfortunately, this causes problems for small i386 systems, > > > > > so I > > > > > am reluctant to commit it to head. Maybe a variant that is > > > > > only > > > > > enabled for amd64 systems with lots of memory would be ok? > > > > > > > > > > > > > > Rick, > > > > > > > > Maybe you can create a SYSCTL to enable/disable it by the end > > > > user > > > > will be > > > > more reasonable. Also, of course, it is so far safe if only > > > > 64Bits > > > > CPU can > > > > enable this SYSCTL. Any other option seems not OK, will be hard > > > > to > > > > judge > > > > what is lots of memory and what is not, it will depends what is > > > > running > > > > onto the system. > > > > > > > I guess adding it so it can be optionally enabled via a sysctl > > > isn't > > > a bad idea. I think the largest risk here is "how do you tell > > > people > > > what the risk of enabling this is"? > > > > > > There are already a bunch of sysctls related to NFS that few > > > people > > > know how to use. (I recall that Alexander has argued that folk > > > don't > > > want > > > worry about these tunables and I tend to agree.) > > > > > > If I do a variant of the patch that uses m_getjcl(..M_WAITOK..), > > > then > > > at least the "breakage" is thread(s) sleeping on "btallo", which > > > is > > > fairly easy to check for, althouggh rather obscure. > > > (Btw, I've never reproduced this for a patch that changes the > > > code to > > > always use MJUMPAGESIZE mbuf clusters. > > > I can only reproduce it intermittently when the patch mixes > > > allocation of > > > MCLBYTES clusters and MJUMPAGESIZE clusters.) > > > > > > I've been poking at it to try and figure out how to get > > > m_getjcl(..M_NOWAIT..) > > > to return NULL instead of looping when it runs out of boundary > > > tags > > > (to > > > see if that can result in a stable implementation of the patch), > > > but > > > haven't had much luck yet. > > > > > > Bottom line: > > > I just don't like committing a patch that can break the system in > > > such an > > > obscure way, even if it is enabled via a sysctl. > > > > > > Others have an opinion on this? > > > > > > Thanks, rick > > > > > > > > > > The SYSCTL will be great, and in case you don't have time to do > > > > it, > > > > I > > > > can > > > > give you a hand. > > > > > > > > I'm gonna do more benchmarks today and will send another > > > > report, > > > > but > > > > in our > > > > product here, I'm inclined to use this patch, because 10~20% > > > > speed > > > > up > > > > in > > > > read for me is a lot. 
:-) > > > > > > > > Thank you so much and best regards, > > > > -- > > > > Marcelo Araujo > > > > araujo@FreeBSD.org > > > > > > > > > > _______________________________________________ > > > > freebsd-net@freebsd.org mailing list > > > > http://lists.freebsd.org/mailman/listinfo/freebsd-net > > > > To unsubscribe, send any mail to > > > > " freebsd-net-unsubscribe@freebsd.org " > > > > > > > > > > > > > > > > > > > -- > > > Marcelo Araujo > > > araujo@FreeBSD.org > > > > > > -- > Marcelo Araujo > araujo@FreeBSD.org > From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 06:41:36 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 14C74228 for ; Tue, 1 Apr 2014 06:41:36 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 50FBA6A3 for ; Tue, 1 Apr 2014 06:41:34 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id JAA19193; Tue, 01 Apr 2014 09:41:32 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WUsO4-000GkN-JD; Tue, 01 Apr 2014 09:41:32 +0300 Message-ID: <533A5F65.7020800@FreeBSD.org> Date: Tue, 01 Apr 2014 09:40:37 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Idwer Vollering , freebsd-fs@FreeBSD.org Subject: Re: ZFS panic: spin lock held too long References: <53391F6C.9070208@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 06:41:36 -0000 on 31/03/2014 14:56 Idwer Vollering said the following: > 2014-03-31 9:55 GMT+02:00 Andriy Gapon : >> on 30/03/2014 21:43 Idwer Vollering said the following: >>> Unread portion of the kernel message buffer: >>> spin lock 0xffffffff814fa030 (smp rendezvous) held by >>> 0xfffff8000fb42920 (tid 100428) too long >> >> Please note the tid and obtain a stack trace for that thread. >> You can switch to the thread using 'tid' command in kgdb. > > Like this? Yes. > ==== vmcore.0 ==== > > (kgdb) tid 100428 > [Switching to thread 266 (Thread 100428)]#0 0xffffffff80c7f478 in > cpustop_handler () > at /usr/src/sys/amd64/amd64/mp_machdep.c:1432 > 1432 savectx(&stoppcbs[cpu]); > (kgdb) bt > #0 0xffffffff80c7f478 in cpustop_handler () at > /usr/src/sys/amd64/amd64/mp_machdep.c:1432 > #1 0xffffffff80c7f43f in ipi_nmi_handler () at > /usr/src/sys/amd64/amd64/mp_machdep.c:1417 > #2 0xffffffff80c8db52 in trap (frame=0xfffffe020b2fdf30) at > /usr/src/sys/amd64/amd64/trap.c:211 > #3 0xffffffff80c757d3 in nmi_calltrap () at > /usr/src/sys/amd64/amd64/exception.S:505 > #4 0xffffffff80c7ed42 in smp_tlb_shootdown (vector= out>, pmap=, > addr1=, addr2=18446741875183812608) at cpufunc.h:309 So, this thread is stuck waiting on some CPU(s) doing TLB shootdown. This must mean that that CPU is stuck doing something. 
I can not provide exact instructions on how to find out which CPU is stuck and what it is doing, but you could try to start with examining output of 'thread apply all bt'. > #5 0xffffffff80c8066c in pmap_invalidate_range (pmap= out>, sva=, > eva=) at /usr/src/sys/amd64/amd64/pmap.c:1441 > #6 0xffffffff80c81d73 in pmap_remove (pmap=0xffffffff81520958, > sva=18446741875183812608, eva=18446741875183812608) > at /usr/src/sys/amd64/amd64/pmap.c:3698 > #7 0xffffffff80b0fc53 in kmem_unback (object=0xffffffff814fecf8, > addr=18446741875183804416, size=8192) > at /usr/src/sys/vm/vm_kern.c:401 > #8 0xffffffff80b0fd44 in kmem_free (vmem=0xffffffff8147a300, > addr=18446741875183804416, size=8192) > at /usr/src/sys/vm/vm_kern.c:421 > #9 0xffffffff80b08d6c in uma_large_free (slab=0xfffff801a62e7640) at > /usr/src/sys/vm/uma_core.c:1097 > #10 0xffffffff80898d17 in free (addr=, > mtp=0xffffffff81397420) > at /usr/src/sys/kern/kern_malloc.c:599 > #11 0xffffffff808f06b6 in sbuf_delete (s=0xfffffe0215613718) at > /usr/src/sys/kern/subr_sbuf.c:761 > #12 0xffffffff808735cb in sysctl_kern_proc_filedesc (oidp= optimized out>, arg1=, > arg2=-8791816929280, req=) at > /usr/src/sys/kern/kern_descrip.c:3540 > #13 0xffffffff808bae0f in sysctl_root (arg1=, > arg2=) > at /usr/src/sys/kern/kern_sysctl.c:1497 > #14 0xffffffff808bb3c8 in userland_sysctl (td=, > name=0xfffffe02156138b0, > namelen=, old=, > oldlenp=, > inkernel=, new=, > retval=, flags=0) > at /usr/src/sys/kern/kern_sysctl.c:1607 > #15 0xffffffff808bb1b4 in sys___sysctl (td=0xfffff8000fb42920, > uap=0xfffffe02156139c0) > at /usr/src/sys/kern/kern_sysctl.c:1533 > #16 0xffffffff80c8ef87 in amd64_syscall (td=0xfffff8000fb42920, > traced=0) at subr_syscall.c:134 > #17 0xffffffff80c7567b in Xfast_syscall () at > /usr/src/sys/amd64/amd64/exception.S:391 > #18 0x000000080102b73a in ?? () > Previous frame inner to this frame (corrupt stack?) 
> > ==== vmcore.1 ==== > > (kgdb) tid 100396 > [Switching to thread 263 (Thread 100396)]#0 0xffffffff80c7f478 in > cpustop_handler () at /usr/src/sys/amd64/amd64/mp_machdep.c:1432 > 1432 savectx(&stoppcbs[cpu]); > (kgdb) bt > #0 0xffffffff80c7f478 in cpustop_handler () at > /usr/src/sys/amd64/amd64/mp_machdep.c:1432 > #1 0xffffffff80c7f43f in ipi_nmi_handler () at > /usr/src/sys/amd64/amd64/mp_machdep.c:1417 > #2 0xffffffff80c8db52 in trap (frame=0xfffffe020b305f30) at > /usr/src/sys/amd64/amd64/trap.c:211 > #3 0xffffffff80c757d3 in nmi_calltrap () at > /usr/src/sys/amd64/amd64/exception.S:505 > #4 0xffffffff80c7ed42 in smp_tlb_shootdown (vector= out>, pmap=, addr1=, > addr2=18446741874812944384) at cpufunc.h:309 > #5 0xffffffff80c8066c in pmap_invalidate_range (pmap= out>, sva=, eva=) > at /usr/src/sys/amd64/amd64/pmap.c:1441 > #6 0xffffffff80c81d73 in pmap_remove (pmap=0xffffffff81520958, > sva=18446741874812944384, eva=18446741874812944384) at > /usr/src/sys/amd64/amd64/pmap.c:3698 > #7 0xffffffff80b0fc53 in kmem_unback (object=0xffffffff814fecf8, > addr=18446741874812936192, size=8192) at /usr/src/sys/vm/vm_kern.c:401 > #8 0xffffffff80b0fd44 in kmem_free (vmem=0xffffffff8147a300, > addr=18446741874812936192, size=8192) at /usr/src/sys/vm/vm_kern.c:421 > #9 0xffffffff80b08d6c in uma_large_free (slab=0xfffff800acffcf00) at > /usr/src/sys/vm/uma_core.c:1097 > #10 0xffffffff80898d17 in free (addr=, > mtp=0xffffffff81a2cc40) at /usr/src/sys/kern/kern_malloc.c:599 > #11 0xffffffff81890bb9 in zfsdev_ioctl () from /boot/kernel/zfs.ko > #12 0xffffffff807ac16f in devfs_ioctl_f (fp=0xfffff800ac0f18a0, com=0, > data=0x0, cred=, td=0xfffffe00078c6000) > at /usr/src/sys/fs/devfs/devfs_vnops.c:757 > #13 0xffffffff808fdeee in kern_ioctl (td=0xfffff8000e525920, fd= optimized out>, com=786678) at file.h:319 > #14 0xffffffff808fdc6f in sys_ioctl (td=0xfffff8000e525920, > uap=0xfffffe02155e39c0) at /usr/src/sys/kern/sys_generic.c:702 > #15 0xffffffff80c8ef87 in amd64_syscall (td=0xfffff8000e525920, > traced=0) at subr_syscall.c:134 > #16 0xffffffff80c7567b in Xfast_syscall () at > /usr/src/sys/amd64/amd64/exception.S:391 > #17 0x00000008019e308a in ?? 
() -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Apr 1 18:19:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C7462B7B; Tue, 1 Apr 2014 18:19:02 +0000 (UTC) Received: from mail-yk0-x234.google.com (mail-yk0-x234.google.com [IPv6:2607:f8b0:4002:c07::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 7E35094E; Tue, 1 Apr 2014 18:19:02 +0000 (UTC) Received: by mail-yk0-f180.google.com with SMTP id 19so4929689ykq.11 for ; Tue, 01 Apr 2014 11:19:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=MV3+6bu2kHxjFVXLPqT4l9lT9ibMhn7Ct4l/6fYYbSc=; b=rWXyJiYDuvm7d6pB7wLbO/t034GuloXIOhVfqzrGPUNDJQnQrfI/dFHvScOo2FBc8H eI2xSdWEXfHmhu2iI0QYJeH0xb0imOp75/R9QEtcTEUb7AJZn4ltmRbT9REuHOgVZT7U ej71XdgwjvJ2IwSn1KIcOdLW7+kbo/yt04PLaVH12CKVbwu9nbr0Q2DglpYo7+EfHy1X cKMqyct3AtxMyE8VtiCn7Grpp2aCmMUKirWh5wv67gyv/LLiVvy8y8qcUADwmSA2oV72 c8tiHI0K5eGYf3v7SrY6Q5qZ30KLm5vrD0Hf4eTv0HiIZzW/WvJv8BbnI/YHtGrVO6hE fEww== MIME-Version: 1.0 X-Received: by 10.236.179.162 with SMTP id h22mr13922815yhm.107.1396376341776; Tue, 01 Apr 2014 11:19:01 -0700 (PDT) Received: by 10.170.95.212 with HTTP; Tue, 1 Apr 2014 11:19:01 -0700 (PDT) In-Reply-To: <533A5F65.7020800@FreeBSD.org> References: <53391F6C.9070208@FreeBSD.org> <533A5F65.7020800@FreeBSD.org> Date: Tue, 1 Apr 2014 20:19:01 +0200 Message-ID: Subject: Re: ZFS panic: spin lock held too long From: Idwer Vollering To: freebsd-fs@freebsd.org, freebsd-hardware@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Apr 2014 18:19:02 -0000 Adding freebsd-hardware@ 2014-04-01 8:40 GMT+02:00 Andriy Gapon : > So, this thread is stuck waiting on some CPU(s) doing TLB shootdown. > This must mean that that CPU is stuck doing something. > I can not provide exact instructions on how to find out which CPU is stuck and > what it is doing, but you could try to start with examining output of 'thread > apply all bt'. 
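To capture that in one pass, something along these lines works (the kernel and crash-dump paths are assumptions; adjust to wherever savecore(8) wrote the core):

    # kgdb /boot/kernel/kernel /var/crash/vmcore.0
    (kgdb) set logging file vmcore.0.kgdb_out
    (kgdb) set logging on
    (kgdb) thread apply all bt
    (kgdb) set logging off

The 'set logging' commands are stock gdb and simply divert the very large backtrace listing to a file instead of the terminal.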
The inline output came to over 200KB for each vmcore, so here is the output from 'thread apply all bt':

http://ra.openbios.org/~idwer/freebsd/vmcore.0.kgdb_out
http://ra.openbios.org/~idwer/freebsd/vmcore.1.kgdb_out

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 2 09:47:32 2014
Return-Path:
Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A8301CC1; Wed, 2 Apr 2014 09:47:32 +0000 (UTC)
Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7D06135A; Wed, 2 Apr 2014 09:47:32 +0000 (UTC)
Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s329lW5A066513; Wed, 2 Apr 2014 09:47:32 GMT (envelope-from linimon@freefall.freebsd.org)
Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s329lWKV066512; Wed, 2 Apr 2014 09:47:32 GMT (envelope-from linimon)
Date: Wed, 2 Apr 2014 09:47:32 GMT
Message-Id: <201404020947.s329lWKV066512@freefall.freebsd.org>
To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
From: linimon@FreeBSD.org
Subject: Re: kern/188187: [zfs] 10-stable: Kernel panic on zpool import: integer divide fault
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 02 Apr 2014 09:47:32 -0000

Synopsis: [zfs] 10-stable: Kernel panic on zpool import: integer divide fault

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Wed Apr 2 09:47:24 UTC 2014
Responsible-Changed-Why: Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=188187 From owner-freebsd-fs@FreeBSD.ORG Thu Apr 3 14:35:30 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1AA719BB for ; Thu, 3 Apr 2014 14:35:30 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EF4CE367 for ; Thu, 3 Apr 2014 14:35:29 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s33EZTiS044424 for ; Thu, 3 Apr 2014 14:35:29 GMT (envelope-from bdrewery@freefall.freebsd.org) Received: (from bdrewery@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s33EZTTG044423 for freebsd-fs@FreeBSD.org; Thu, 3 Apr 2014 14:35:29 GMT (envelope-from bdrewery) Received: (qmail 20902 invoked from network); 3 Apr 2014 09:35:27 -0500 Received: from unknown (HELO roundcube.xk42.net) (10.10.5.5) by sweb.xzibition.com with SMTP; 3 Apr 2014 09:35:27 -0500 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Thu, 03 Apr 2014 09:35:27 -0500 From: Bryan Drewery To: freebsd-fs@FreeBSD.org Subject: Poudriere: rm -rf: Directory not empty Organization: FreeBSD Message-ID: X-Sender: bdrewery@FreeBSD.org User-Agent: Roundcube Webmail/0.9.5 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Apr 2014 14:35:30 -0000 Hi, While using Poudriere to build packages on segregated tmpfs jails we sometimes get the following errors: ====>> [08] Starting build of devel/qt4-qt3support ====>> [08] Starting build of graphics/qt4-opengl rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include/Qt: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr: Directory not empty ====>> [08] Starting build of math/py-numpy rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include/Qt: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include: Directory not empty rm: 
/usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports: Directory not empty rm: /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr: Directory not empty What is happening here is that the devel/qt4-qt3support finishes, fails to cleanup itself, then the next build tries to cleanup the previous tempdir and fails. The next build then fails, and so on. Eventually crashing the whole build. This is the result of just "rm -rf /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/ports/devel/qt4-qt3support/work". devel/qt4-qt3support runs rm -rf, fails. kill -9 -1 is ran in jail. graphics/qt4-opengl starts, runs jail -r [kills processes], tries rm -rf, fails math/py-numpy starts, runs jail -r [kills processes], tries rm -rf, fails Another example is at the bottom of http://beefy1.isc.freebsd.org/bulk/83i386-default/2014-02-12_03h42m23s/logs/eclipse-3.7.1_4.log The eclipse one involved a process crashing and a coredump as well. I thought perhaps there was a race between writing core and removing the directory, but I found no evidence of that either by code inspection or testing. As shown above, no processes should be running in the jail at this point. Poudriere itself is not touching these directories outside of the jail either. There's no nullfs mounts of these files to elsewhere either that may be getting touched. What might cause this? It's very difficult to reproduce and is reported about once every 2 months or less. Note well this is not due to flags. A rerun of these same ports won't hit the issue. So far the workaround is to umount the tmpfs and remount it, but this is not a solution as tmpfs is optional for Poudriere. From past research it was found to not be tmpfs-specific, but my confidence level is not 100% on that. This has been seen on at least 9.2-R, and 10.0-R. I can't recreate this with simple tests though on ZFS or TMPFS. cd /tmp ( rm -rf test; mkdir test; cat /dev/random > test/foo & sleep 1; rm -rf test; kill $! ) ( rm -rf test; mkdir test; mkfifo test/foo; cat test/foo & sleep 1; rm -rf test; kill $! ) ( rm -rf test; mkdir test; cd test; rm -rf ../test ) In the other cases it's not clear if looping on rm -rf would work or if it would spin forever. We have not tried it since it's so difficult to reproduce. 
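One cheap experiment for the next occurrence would be a bounded retry around the failing rm, e.g. (a sketch; the wrkdir path is hypothetical):

    dir=/usr/local/poudriere/.../wrkdirs/...    # whichever wrkdir failed
    n=0
    # if this ever succeeds the condition is transient; if it runs out
    # of tries the directory is stuck rather than merely busy
    while ! rm -rf "$dir" 2>/dev/null; do
        n=$((n + 1))
        [ "$n" -ge 30 ] && { echo "still not empty after $n tries"; break; }
        sleep 1
    done

That would at least separate "spins forever" from "clears after a moment".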
-- Regards, Bryan Drewery From owner-freebsd-fs@FreeBSD.ORG Thu Apr 3 17:00:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A8BB1699 for ; Thu, 3 Apr 2014 17:00:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 899A15EB for ; Thu, 3 Apr 2014 17:00:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s33H01c0087811 for ; Thu, 3 Apr 2014 17:00:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s33H01Hk087810; Thu, 3 Apr 2014 17:00:01 GMT (envelope-from gnats) Date: Thu, 3 Apr 2014 17:00:01 GMT Message-Id: <201404031700.s33H01Hk087810@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Karl Denninger List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Apr 2014 17:00:01 -0000 The following reply was made to PR kern/187594; it has been noted by GNATS. From: Karl Denninger To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Date: Thu, 03 Apr 2014 11:57:50 -0500 This is a cryptographically signed message in MIME format. --------------ms040709030506010201090909 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable After more than a week of operation without any changes on a very busy=20 production server this is what the status looks like at this particular=20 moment in time (caught it being pretty quiet at the moment... 
slow day):

[karl@NewFS ~]$ uptime
11:56AM up 10 days, 20:37, 1 user, load averages: 0.80, 0.59, 0.58
[karl@NewFS ~]$ uname -v
FreeBSD 10.0-STABLE #22 r263665:263671M: Sun Mar 23 15:00:48 CDT 2014 karl@NewFS.denninger.net:/usr/obj/usr/src/sys/KSD-SMP

1 users Load 0.50 0.57 0.58 Apr 3 11:52
Mem:KB REAL VIRTUAL VN PAGER SWAP PAGER
Tot Share Tot Share Free in out in out
Act 4503936 32680 9319616 54908 701712 count
All 17598k 42312 10162228 293268 pages
Proc: Interrupts
r p d s w Csw Trp Sys Int Sof Flt ioflt 2635 total
2 245 3 9936 4302 12k 990 442 2878 1161 cow 11 uart0 4
1460 zfod 53 uhci0 16
0.6%Sys 0.1%Intr 1.5%User 0.0%Nice 97.8%Idle ozfod pcm0 17
| | | | | | | | | | %ozfod ehci0 uhci
> daefr uhci1 21
dtbuf 1779 prcfr 532 uhci3 ehci
Namei Name-cache Dir-cache 485888 desvn 3862 totfr 44 twa0 30
Calls hits % hits % 145761 numvn react 989 cpu0:timer
18611 18549 100 121467 frevn pdwak 69 mps0 256
909 pdpgs 24 em0:rx 0
Disks da0 da1 da2 da3 da4 da5 da6 intrn 32 em0:tx 0
KB/t 10.30 10.39 0.00 0.00 22.61 24.69 24.39 19017980 wire em0:link
tps 21 21 0 0 10 16 16 2197580 act 118 em1:rx 0
MB/s 0.22 0.22 0.00 0.00 0.22 0.39 0.39 2544544 inact 107 em1:tx 0
%busy 19 19 0 0 0 1 1 3276 cache em1:link
698064 free ahci0:ch0
buf 32 cpu1:timer
24 cpu10:time
50 cpu6:timer
26 cpu12:time
37 cpu7:timer
45 cpu14:time
41 cpu4:timer
35 cpu15:time
25 cpu5:timer
45 cpu9:timer
45 cpu2:timer
102 cpu11:time
63 cpu3:timer
41 cpu13:time
45 cpu8:timer

[karl@NewFS ~]$ zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
media 2.72T 2.12T 616G 77% 1.00x ONLINE -
zroot 234G 18.8G 215G 8% 1.36x ONLINE -
zstore 3.63T 2.50T 1.13T 68% 1.00x ONLINE -

[karl@NewFS ~]$ zfs-stats -A
------------------------------------------------------------------------
ZFS Subsystem Report Thu Apr 3 11:53:42 2014
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0

ARC Misc:
Deleted: 27.84m
Recycle Misses: 1.12m
Mutex Misses: 2.65k
Evict Skips: 39.26m

ARC Size: 59.13% 13.20 GiB
Target Size: (Adaptive) 59.14% 13.20 GiB
Min Size (Hard Limit): 12.50% 2.79 GiB
Max Size (High Water): 8:1 22.33 GiB

ARC Size Breakdown:
Recently Used Cache Size: 81.41% 10.75 GiB
Frequently Used Cache Size: 18.59% 2.46 GiB

ARC Hash Breakdown:
Elements Max: 2.69m
Elements Current: 63.22% 1.70m
Collisions: 95.13m
Chain Max: 24
Chains: 413.62k
------------------------------------------------------------------------

[karl@NewFS ~]$ zfs-stats -E
------------------------------------------------------------------------
ZFS Subsystem Report Thu Apr 3 11:53:59 2014
------------------------------------------------------------------------
ARC Efficiency: 1.28b
Cache Hit Ratio: 98.37% 1.26b
Cache Miss Ratio: 1.63% 20.80m
Actual Hit Ratio: 60.07% 766.91m

Data Demand Efficiency: 99.15% 435.02m
Data Prefetch Efficiency: 20.45% 17.49m

CACHE HITS BY CACHE LIST:
Anonymously Used: 38.72% 486.24m
Most Recently Used: 3.74% 46.94m
Most Frequently Used: 57.33% 719.97m
Most Recently Used Ghost: 0.06% 792.68k
Most Frequently Used Ghost: 0.16% 1.97m

CACHE HITS BY DATA TYPE:
Demand Data: 34.34% 431.32m
Prefetch Data: 0.28% 3.58m
Demand Metadata: 23.72% 297.92m
Prefetch Metadata: 41.65% 523.09m

CACHE MISSES BY DATA TYPE:
Demand Data: 17.75% 3.69m
Prefetch Data: 66.88% 13.91m
Demand Metadata: 5.78% 1.20m
Prefetch Metadata: 9.60% 2.00m
------------------------------------------------------------------------

Grinnin' big, in short.
I have no reason to make further changes to the code or defaults.

--=20
-- Karl
karl@denninger.net

--------------ms040709030506010201090909--

From owner-freebsd-fs@FreeBSD.ORG Thu Apr 3 17:30:50 2014
Return-Path:
Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 593CE142; Thu, 3 Apr 2014 17:30:50 +0000 (UTC)
Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D315E9A0; Thu, 3 Apr 2014 17:30:49 +0000 (UTC)
Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.8/8.14.8) with ESMTP id s33HUioI006268; Thu, 3 Apr 2014 20:30:44 +0300 (EEST) (envelope-from kostikbel@gmail.com)
DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s33HUioI006268
Received: (from kostik@localhost) by tom.home (8.14.8/8.14.8/Submit) id s33HUiQA006266; Thu, 3 Apr 2014 20:30:44 +0300 (EEST) (envelope-from kostikbel@gmail.com)
X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f
Date: Thu, 3 Apr 2014 20:30:44 +0300
From: Konstantin Belousov
To: Bryan Drewery
Subject: Re: Poudriere: rm -rf: Directory not empty
Message-ID: <20140403173044.GY21331@kib.kiev.ua>
References:
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="qnuS/wU1MXEWeKjo"
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home
Cc: freebsd-fs@FreeBSD.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 03 Apr 2014 17:30:50 -0000

--qnuS/wU1MXEWeKjo
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Apr 03, 2014 at 09:35:27AM -0500, Bryan Drewery wrote:
> Hi,
>=20
> While using Poudriere to build packages on segregated tmpfs jails
> we sometimes get the following errors:
>=20
> =3D=3D=3D=3D>> [08] Starting build of devel/qt4-qt3support
> =3D=3D=3D=3D>> [08] Starting build of graphics/qt4-opengl
> rm:=20
> /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include/Qt:= =20
> Directory not empty
> rm:=20
> /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include:=20
> Directory not empty
> rm:=20
> /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5:=20
> Directory not empty
> rm:=20
>
juNPdyfyJPZO4Oapvn0TwWY03JBX2BbhCRU5wLU7U0PPpExH/wbH1EpjXMT5Xx5g15EERk7N 6E7nMROdEJmyK2N1pkD43paPX4oz5pjwiZZSOzr8HrV/pxzUitv2zOvonpmwYIpFlub9XGk1 MUn2NNCpttbhLRgMSFFa99gakenZEjq3mrW4chJyHGg10FXh+Mrxh8Dv/HRB1sVRzp3D8LPw 2nUw2MV9mhfZJGGz5QSkqudGwOkC7EMvHtdhEiyLWiHs6Ro9YWT65uGiz1uZcdhexGyjJjg8 pSkcLaG++Ty+LtwJknhdwgHlDdDsThE1Zf8YXf10BA96uykt53sLbwnW20yy5FpGr8dcgiWh H4a84Div8YZ0OnR3+MyOXMU0+EBet1ojAarc2xMH6m+MnqIMl6nL9BhiEA51KE84odL6Nw8z sNJgryAAAAAAAAA= --------------ms040709030506010201090909-- From owner-freebsd-fs@FreeBSD.ORG Thu Apr 3 17:30:50 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 593CE142; Thu, 3 Apr 2014 17:30:50 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D315E9A0; Thu, 3 Apr 2014 17:30:49 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.8/8.14.8) with ESMTP id s33HUioI006268; Thu, 3 Apr 2014 20:30:44 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s33HUioI006268 Received: (from kostik@localhost) by tom.home (8.14.8/8.14.8/Submit) id s33HUiQA006266; Thu, 3 Apr 2014 20:30:44 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Thu, 3 Apr 2014 20:30:44 +0300 From: Konstantin Belousov To: Bryan Drewery Subject: Re: Poudriere: rm -rf: Directory not empty Message-ID: <20140403173044.GY21331@kib.kiev.ua> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="qnuS/wU1MXEWeKjo" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Apr 2014 17:30:50 -0000 --qnuS/wU1MXEWeKjo Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Apr 03, 2014 at 09:35:27AM -0500, Bryan Drewery wrote: > Hi, >=20 > While using Poudriere to build packages on segregated tmpfs jails > we sometimes get the following errors: >=20 > =3D=3D=3D=3D>> [08] Starting build of devel/qt4-qt3support > =3D=3D=3D=3D>> [08] Starting build of graphics/qt4-opengl > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include/Qt:= =20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5:=20 > Directory not empty > rm:=20 > 
/usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr:=20 > Directory not empty > =3D=3D=3D=3D>> [08] Starting build of math/py-numpy > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include/Qt:= =20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5/include:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work/qt-everywhere-opensource-src-4.8.5:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts:=20 > Directory not empty > rm:=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr:=20 > Directory not empty >=20 > What is happening here is that the devel/qt4-qt3support finishes, > fails to cleanup itself, then the next build tries to cleanup the > previous tempdir and fails. The next build then fails, and so on. > Eventually crashing the whole build. >=20 > This is the result of just "rm -rf=20 > /usr/local/poudriere/data/build/92amd64-default/ref/../08/wrkdirs/usr/por= ts/devel/qt4-qt3support/work". >=20 > devel/qt4-qt3support runs rm -rf, fails. kill -9 -1 is ran in jail. > graphics/qt4-opengl starts, runs jail -r [kills processes], tries rm=20 > -rf, fails > math/py-numpy starts, runs jail -r [kills processes], tries rm -rf,=20 > fails >=20 > Another example is at the bottom of=20 > http://beefy1.isc.freebsd.org/bulk/83i386-default/2014-02-12_03h42m23s/lo= gs/eclipse-3.7.1_4.log > The eclipse one involved a process crashing and a coredump as well. > I thought perhaps there was a race between writing core and > removing the directory, but I found no evidence of that either > by code inspection or testing. >=20 > As shown above, no processes should be running in the jail at this > point. Poudriere itself is not touching these directories outside > of the jail either. There's no nullfs mounts of these > files to elsewhere either that may be getting touched. >=20 > What might cause this? It's very difficult to reproduce and is > reported about once every 2 months or less. Note well this is > not due to flags. A rerun of these same ports won't hit the > issue. >=20 > So far the workaround is to umount the tmpfs and remount it, but this > is not a solution as tmpfs is optional for Poudriere. 
From past research
> it was found to not be tmpfs-specific, but my confidence level is not=20
> 100%
> on that.
>=20
> This has been seen on at least 9.2-R, and 10.0-R.
>=20
> I can't recreate this with simple tests though on ZFS or TMPFS.
>=20
> cd /tmp
> ( rm -rf test; mkdir test; cat /dev/random > test/foo & sleep 1; rm=20
> -rf test; kill $! )
> ( rm -rf test; mkdir test; mkfifo test/foo; cat test/foo & sleep 1; rm= =20
> -rf test; kill $! )
> ( rm -rf test; mkdir test; cd test; rm -rf ../test )
>=20
> In the other cases it's not clear if looping on rm -rf would work or
> if it would spin forever. We have not tried it since it's so difficult
> to reproduce.
>=20

When the situation occurs and you notice it, do you still have access to the tmpfs directory on which rm -rf failed? If yes, try to do ls -la there, and ktrace the "rm -rf".

Another approach is to patch tmpfs_rmdir() in tmpfs_vnops.c and dump some information when the ENOTEMPTY error is returned; e.g. you could print the directory content and tn_size.

--qnuS/wU1MXEWeKjo--

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 5 23:20:15 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AE81A38F for ; Sat, 5 Apr 2014 23:20:15 +0000 (UTC)
Received: from mail-wg0-x230.google.com (mail-wg0-x230.google.com [IPv6:2a00:1450:400c:c00::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 47AEF9F5 for ; Sat, 5 Apr 2014 23:20:15 +0000 (UTC)
Received: by mail-wg0-f48.google.com with SMTP id l18so5143988wgh.7 for ; Sat, 05 Apr 2014 16:20:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject :content-type:content-transfer-encoding; bh=x488dk3vIt+vlRyLUCvuK5+UKnkmawryTWD1+br056Y=; b=LhHPpmIvKXZ7Qs+27SXcKv5ZDmAB8hfoEo8fZw6KCQxp3sqmcVEWRZQFv6v9wH6z4q 87jMMUUYvrJg+oa2lOKCKJr9dl0tHHHMT/wLWCBxTYPzwWlnjI2qGnN8UvasOM8Os7kM CzH5rdrlmHT9gFXjKPz5nnFBC1y2K/uFwvhBiWR0DAWX1wAjMJXuxK3OfN/Bc1Y4jJR3 UXYQ0evIwdQ464Li24GtDf8wNYCK27OaTFjCemC9AsgxDZ8h8LjhL55fv+LO/kmclSUg Mjv0JVV7Wxnyn9EEcyfprFByA9rvsMUluNTCfiZBWB3SA3K4AINjrTNWmeAVt4AHbB8Z qcuA==
X-Received: by 10.180.149.143 with SMTP id ua15mr14686294wib.36.1396740013645; Sat, 05 Apr 2014 16:20:13 -0700 (PDT)
Received: from [192.168.20.30] (81-178-2-118.dsl.pipex.com.
[81.178.2.118]) by mx.google.com with ESMTPSA id kp5sm18934730wjb.30.2014.04.05.16.20.12 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 05 Apr 2014 16:20:12 -0700 (PDT) Message-ID: <53408FAB.8080202@gmail.com> Date: Sun, 06 Apr 2014 00:20:11 +0100 From: Kaya Saman User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: FreeBSD Filesystems Subject: Device Removed by Administrator in ZPOOL? Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Apr 2014 23:20:15 -0000 Hi, I'm running FreeBSD 10.0 x64 on a Xeon E5 based system with 8GB RAM. Checking the ZPOOL status I saw one of my drives has been offlined... the exact error is this: # zpool status -v pool: ZPOOL_2 state: DEGRADED status: One or more devices has been removed by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scan: scrub repaired 0 in 9h3m with 0 errors on Sat Apr 5 03:46:55 2014 config: NAME STATE READ WRITE CKSUM ZPOOL_2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 da0 ONLINE 0 0 0 14870388343127772554 REMOVED 0 0 0 was /dev/da1 da2 ONLINE 0 0 0 da3 ONLINE 0 0 0 da4 ONLINE 0 0 0 I think this is due to a dead disk however, I'm not certain which is why I wanted to ask here as I didn't remove the drive at all..... rather then some kind of OS/ZFS error. The drives are 2TB WD Green drives all connected to an LSI HBA; everything is still under warranty so no big issue there and I have external backups too so I'm not really that worried, I'm just trying to work out what's going on. Are my suspicions correct or should I simply try to reboot the system and see if the drive comes back online? Regards, Kaya From owner-freebsd-fs@FreeBSD.ORG Sat Apr 5 23:52:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C8208BE2 for ; Sat, 5 Apr 2014 23:52:20 +0000 (UTC) Received: from MOY07-NIX1.wadns.net (moy07-nix1.wadns.net [41.185.26.137]) (using TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 66D93C9B for ; Sat, 5 Apr 2014 23:52:20 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by MOY07-NIX1.wadns.net (Postfix) with ESMTP id 48B02226DD; Sun, 6 Apr 2014 01:45:04 +0200 (SAST) X-Virus-Scanned: Debian amavisd-new at doggle.co.za Received: from MOY07-NIX1.wadns.net ([127.0.0.1]) by localhost (MOY07-NIX1.wadns.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RSwXjH-IB-0S; Sun, 6 Apr 2014 01:44:58 +0200 (SAST) Received: from [10.0.0.117] (196-215-19-31.dynamic.isadsl.co.za [196.215.19.31]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by MOY07-NIX1.wadns.net (Postfix) with ESMTPSA id 1F80E20077; Sun, 6 Apr 2014 01:44:58 +0200 (SAST) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (1.0) Subject: Re: Device Removed by Administrator in ZPOOL? 
From: Vusa Moyo X-Mailer: iPad Mail (11D167) In-Reply-To: <53408FAB.8080202@gmail.com> Date: Sun, 6 Apr 2014 01:44:57 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> References: <53408FAB.8080202@gmail.com> To: Kaya Saman Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Apr 2014 23:52:20 -0000 This is more than likely a failed drive.=20 Have you physically looked at the server for orange lights which may help ID= the failed drive?? =20 There could also be tools to query the lsi hba.=20 Sent from my iPad > On Apr 6, 2014, at 1:20 AM, Kaya Saman wrote: >=20 > Hi, >=20 > I'm running FreeBSD 10.0 x64 on a Xeon E5 based system with 8GB RAM. >=20 >=20 > Checking the ZPOOL status I saw one of my drives has been offlined... the e= xact error is this: >=20 > # zpool status -v > pool: ZPOOL_2 > state: DEGRADED > status: One or more devices has been removed by the administrator. > Sufficient replicas exist for the pool to continue functioning in a > degraded state. > action: Online the device using 'zpool online' or replace the device with > 'zpool replace'. > scan: scrub repaired 0 in 9h3m with 0 errors on Sat Apr 5 03:46:55 2014 > config: >=20 > NAME STATE READ WRITE CKSUM > ZPOOL_2 DEGRADED 0 0 0 > raidz2-0 DEGRADED 0 0 0 > da0 ONLINE 0 0 0 > 14870388343127772554 REMOVED 0 0 0 was /dev/da1 > da2 ONLINE 0 0 0 > da3 ONLINE 0 0 0 > da4 ONLINE 0 0 0 >=20 >=20 > I think this is due to a dead disk however, I'm not certain which is why I= wanted to ask here as I didn't remove the drive at all..... rather then som= e kind of OS/ZFS error. >=20 >=20 > The drives are 2TB WD Green drives all connected to an LSI HBA; everything= is still under warranty so no big issue there and I have external backups t= oo so I'm not really that worried, I'm just trying to work out what's going o= n. >=20 >=20 > Are my suspicions correct or should I simply try to reboot the system and s= ee if the drive comes back online? 
>=20 >=20 > Regards, >=20 >=20 > Kaya > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sun Apr 6 00:12:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2514AD10 for ; Sun, 6 Apr 2014 00:12:38 +0000 (UTC) Received: from mail-we0-x22a.google.com (mail-we0-x22a.google.com [IPv6:2a00:1450:400c:c03::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id AFADBDE2 for ; Sun, 6 Apr 2014 00:12:37 +0000 (UTC) Received: by mail-we0-f170.google.com with SMTP id w61so5163905wes.1 for ; Sat, 05 Apr 2014 17:12:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=Fb84jn7r04z954+DppURNWmtbAeDlpSc/QQL5HDtujg=; b=bKdQYZ4lzPJElIpaWZGZCGU8VWy8NCQQ/VsTfBtdAXAt1cbnUKiJhUzRmJGZzYqVSR lJGgn2B4fpqCe3MHcQUXfMTb1S5LVFb7aD6To0RJ4WzY12cdvbjXa6QdY/ZHU+9/4UHN Vi1ooujRU49XJ4QuBjQmtACdMpt0ZhqZE8nvJgbLOXob3Ntf7ReDE1Z1mpXTe66Ph0CB sZhUMxGC10KooSavmSeIm71z7uzre1oQ/awYWR+yVkVrxeZEkmVSELaBuJxR6AJsu0Jb JqBdbfpM2lIUxNbXTOrrgHQ1MYdq69N14n1qEVT+lNxHSk2fTRPVTbJWof3b0hNHhb6T 32ug== X-Received: by 10.194.203.2 with SMTP id km2mr31232655wjc.72.1396743155597; Sat, 05 Apr 2014 17:12:35 -0700 (PDT) Received: from [192.168.20.30] (81-178-2-118.dsl.pipex.com. [81.178.2.118]) by mx.google.com with ESMTPSA id cl9sm19105164wjc.25.2014.04.05.17.12.34 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 05 Apr 2014 17:12:34 -0700 (PDT) Message-ID: <53409BF1.6050001@gmail.com> Date: Sun, 06 Apr 2014 01:12:33 +0100 From: Kaya Saman User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Vusa Moyo Subject: Re: Device Removed by Administrator in ZPOOL? References: <53408FAB.8080202@gmail.com> <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> In-Reply-To: <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Apr 2014 00:12:38 -0000 Many thanks for the response! The server doesn't show any lights for "drive error" however, the blue read LED isn't coming on, on the drive in question (as removed from ZPOOL). I will have a look for LSI tools in @Ports and also see if the BIOS LSI hook comes up with anything. Regards, Kaya On 04/06/2014 12:44 AM, Vusa Moyo wrote: > This is more than likely a failed drive. > > Have you physically looked at the server for orange lights which may help ID the failed drive?? > > There could also be tools to query the lsi hba. > > Sent from my iPad > >> On Apr 6, 2014, at 1:20 AM, Kaya Saman wrote: >> >> Hi, >> >> I'm running FreeBSD 10.0 x64 on a Xeon E5 based system with 8GB RAM. 
>> >> >> Checking the ZPOOL status I saw one of my drives has been offlined... the exact error is this: >> >> # zpool status -v >> pool: ZPOOL_2 >> state: DEGRADED >> status: One or more devices has been removed by the administrator. >> Sufficient replicas exist for the pool to continue functioning in a >> degraded state. >> action: Online the device using 'zpool online' or replace the device with >> 'zpool replace'. >> scan: scrub repaired 0 in 9h3m with 0 errors on Sat Apr 5 03:46:55 2014 >> config: >> >> NAME STATE READ WRITE CKSUM >> ZPOOL_2 DEGRADED 0 0 0 >> raidz2-0 DEGRADED 0 0 0 >> da0 ONLINE 0 0 0 >> 14870388343127772554 REMOVED 0 0 0 was /dev/da1 >> da2 ONLINE 0 0 0 >> da3 ONLINE 0 0 0 >> da4 ONLINE 0 0 0 >> >> >> I think this is due to a dead disk however, I'm not certain which is why I wanted to ask here as I didn't remove the drive at all..... rather then some kind of OS/ZFS error. >> >> >> The drives are 2TB WD Green drives all connected to an LSI HBA; everything is still under warranty so no big issue there and I have external backups too so I'm not really that worried, I'm just trying to work out what's going on. >> >> >> Are my suspicions correct or should I simply try to reboot the system and see if the drive comes back online? >> >> >> Regards, >> >> >> Kaya >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sun Apr 6 01:21:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7D8765DF for ; Sun, 6 Apr 2014 01:21:22 +0000 (UTC) Received: from mail-we0-x22c.google.com (mail-we0-x22c.google.com [IPv6:2a00:1450:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0D8E726B for ; Sun, 6 Apr 2014 01:21:21 +0000 (UTC) Received: by mail-we0-f172.google.com with SMTP id t61so5184280wes.31 for ; Sat, 05 Apr 2014 18:21:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=DOU6hxF56y4DnJ67bH6ysKfDJW3STzfkvBuFkS+L/V8=; b=ntKpnojStgN2UK4j4LeZzzCNrvtfirz/O2sSpkvw8QgbCbQSVhAIUKVoOzKLhnzBeC /4Ph0qiBUu6k6UVLSiwL/AToKKl55q5OV3FHIn/MRn/3Wm8MoBUzBgqeguoghx3DsAN8 pqInVYxRk+P8XCzSoDSfx2WD+/JowraXzfOp9WfNQuhphz34llgwor8854VQfbfyG3KO bjsgnqrQfePQM7xc5FQc64Jn/eP5KXTB9AXg7JXX6lpFkQoFW7CHL53QenW+Id5F8jgN IldFqhRH4EB1DEguPmJnyDAFrLqeqDJy8iNQooL8TJxlZnSEVvYwJZAgKZ2sgdXpemRA eGeQ== X-Received: by 10.194.20.65 with SMTP id l1mr31360478wje.39.1396747280422; Sat, 05 Apr 2014 18:21:20 -0700 (PDT) Received: from [192.168.20.30] (81-178-2-118.dsl.pipex.com. 
[81.178.2.118]) by mx.google.com with ESMTPSA id cu6sm9646831wjb.8.2014.04.05.18.21.19 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 05 Apr 2014 18:21:19 -0700 (PDT) Message-ID: <5340AC0E.7020900@gmail.com> Date: Sun, 06 Apr 2014 02:21:18 +0100 From: Kaya Saman User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: kpneal@pobox.com Subject: Re: Device Removed by Administrator in ZPOOL? References: <53408FAB.8080202@gmail.com> <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> <53409BF1.6050001@gmail.com> <20140406002849.GA14765@neutralgood.org> In-Reply-To: <20140406002849.GA14765@neutralgood.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: FreeBSD Filesystems , Vusa Moyo X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Apr 2014 01:21:22 -0000 On 04/06/2014 01:28 AM, kpneal@pobox.com wrote: > On Sun, Apr 06, 2014 at 01:12:33AM +0100, Kaya Saman wrote: >> Many thanks for the response! >> >> The server doesn't show any lights for "drive error" however, the blue >> read LED isn't coming on, on the drive in question (as removed from ZPOOL). >> >> I will have a look for LSI tools in @Ports and also see if the BIOS LSI >> hook comes up with anything. > Have you seen any other errors in your logs? Seems like if a drive fails > there should be some other error message reporting the errors that resulted > in ZFS marking the drive removed. What does 'dmesg' have to say? > > Once ZFS has stopped using the drive (for whatever reason) I wouldn't > expect you to see anything else happening on the drive. So the light not > coming on doesn't really tell us anything new. > > Also, aren't 'green' drives the kind that spin down and then have to spin > back up when a request comes in? I don't know what happens if a drive takes > "too long" to respond because it has spun down. I have no idea how FreeBSD > handles that, and I also don't know if ZFS adds anything to the equation. > Hopefully someone else here will clue me/us in. > Unfortunately I haven't seen any other errors.... I looked at dmesg but it didn't say anything? I'll try physically removing the drive and putting it back in again, though I don't think it'll help. ....just done it. The blue light blinked for a second and then that was it, dmesg doesn't show anything and the drive isn't showing up in /dev either...... >> On 04/06/2014 12:44 AM, Vusa Moyo wrote: >>> This is more than likely a failed drive. >>> >>> Have you physically looked at the server for orange lights which may help ID the failed drive?? >>> >>> There could also be tools to query the lsi hba. >>> >>> Sent from my iPad >>> >>>> On Apr 6, 2014, at 1:20 AM, Kaya Saman wrote: >>>> >>>> Hi, >>>> >>>> I'm running FreeBSD 10.0 x64 on a Xeon E5 based system with 8GB RAM. >>>> >>>> >>>> Checking the ZPOOL status I saw one of my drives has been offlined... the exact error is this: >>>> >>>> # zpool status -v >>>> pool: ZPOOL_2 >>>> state: DEGRADED >>>> status: One or more devices has been removed by the administrator. >>>> Sufficient replicas exist for the pool to continue functioning in a >>>> degraded state. >>>> action: Online the device using 'zpool online' or replace the device with >>>> 'zpool replace'. 
>>>> scan: scrub repaired 0 in 9h3m with 0 errors on Sat Apr 5 03:46:55 2014 >>>> config: >>>> >>>> NAME STATE READ WRITE CKSUM >>>> ZPOOL_2 DEGRADED 0 0 0 >>>> raidz2-0 DEGRADED 0 0 0 >>>> da0 ONLINE 0 0 0 >>>> 14870388343127772554 REMOVED 0 0 0 was /dev/da1 >>>> da2 ONLINE 0 0 0 >>>> da3 ONLINE 0 0 0 >>>> da4 ONLINE 0 0 0 >>>> >>>> >>>> I think this is due to a dead disk however, I'm not certain which is why I wanted to ask here as I didn't remove the drive at all..... rather then some kind of OS/ZFS error. >>>> >>>> >>>> The drives are 2TB WD Green drives all connected to an LSI HBA; everything is still under warranty so no big issue there and I have external backups too so I'm not really that worried, I'm just trying to work out what's going on. >>>> >>>> >>>> Are my suspicions correct or should I simply try to reboot the system and see if the drive comes back online? From owner-freebsd-fs@FreeBSD.ORG Sun Apr 6 01:36:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 142C975C for ; Sun, 6 Apr 2014 01:36:02 +0000 (UTC) Received: from mail-wg0-x229.google.com (mail-wg0-x229.google.com [IPv6:2a00:1450:400c:c00::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9621539C for ; Sun, 6 Apr 2014 01:36:01 +0000 (UTC) Received: by mail-wg0-f41.google.com with SMTP id n12so5259636wgh.24 for ; Sat, 05 Apr 2014 18:35:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type; bh=sbv4MNhF84PKfMJcGxxqSDFRXNTlcitMkAT7OW/v7+4=; b=d5H8x6QJrtwCkPXR+j78Wcvkr/an0ZX3vM0zJ4lB4T3GrPuEBVZQCOWeltOXZmPFU+ SbY96h/YULO8Wbgf4+02vCaSYUL+A0CKZdFMow8nAApAZBlsrwTpa6YDPdhgMX0n0khi R8xkS/VVT5NZFHY2HUXV8Kji2MBR2tJrZUXdncva8to0swSDzjj4Go7OHZ9Pc+/ey8lJ hvXe7jUaSYBMfPmX65aXNrZEOe9jZPgYrgRdP0k1DR+ltEB3ovgJzZ2jq+YHMLYx3rQT GXD3IHLZUvdfogX4HT8QXzLJwIJ9gWYztVaBjAiCJjnH7AVFxaVo22l090DYWJOKYHhm 8zIA== X-Received: by 10.194.109.227 with SMTP id hv3mr32015270wjb.10.1396748159914; Sat, 05 Apr 2014 18:35:59 -0700 (PDT) Received: from [192.168.20.30] (81-178-2-118.dsl.pipex.com. [81.178.2.118]) by mx.google.com with ESMTPSA id fs16sm14123000wic.18.2014.04.05.18.35.58 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 05 Apr 2014 18:35:59 -0700 (PDT) Message-ID: <5340AF7D.5000204@gmail.com> Date: Sun, 06 Apr 2014 02:35:57 +0100 From: Kaya Saman User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: kpneal@pobox.com Subject: Re: Device Removed by Administrator in ZPOOL? 
References: <53408FAB.8080202@gmail.com> <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> <53409BF1.6050001@gmail.com> <20140406002849.GA14765@neutralgood.org> In-Reply-To: <20140406002849.GA14765@neutralgood.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD Filesystems , Vusa Moyo X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Apr 2014 01:36:02 -0000 On 04/06/2014 01:28 AM, kpneal@pobox.com wrote: > On Sun, Apr 06, 2014 at 01:12:33AM +0100, Kaya Saman wrote: >> Many thanks for the response! >> >> The server doesn't show any lights for "drive error" however, the blue >> read LED isn't coming on, on the drive in question (as removed from ZPOOL). >> >> I will have a look for LSI tools in @Ports and also see if the BIOS LSI >> hook comes up with anything. > Have you seen any other errors in your logs? Seems like if a drive fails > there should be some other error message reporting the errors that resulted > in ZFS marking the drive removed. What does 'dmesg' have to say? > > Once ZFS has stopped using the drive (for whatever reason) I wouldn't > expect you to see anything else happening on the drive. So the light not > coming on doesn't really tell us anything new. > > Also, aren't 'green' drives the kind that spin down and then have to spin > back up when a request comes in? I don't know what happens if a drive takes > "too long" to respond because it has spun down. I have no idea how FreeBSD > handles that, and I also don't know if ZFS adds anything to the equation. > Hopefully someone else here will clue me/us in. > Something interesting I've just read..... https://forums.freebsd.org/viewtopic.php?&t=4534 [quote] Very, very, very poor data throughput. Drive dropping off the controller. Running an array verify would bring the server to a grinding halt. Incompatibilities with riser cards used in 2U rackmount servers (didn't matter if it was the el-cheapo one that came with the case, or a Tyan one specifically for the motherboard). Lack of useable management tools for Linux/FreeBSD (the mega* tools are a joke compared to 3dm2 or even the BIOS config tool for 3Ware). [/quote] I wonder if that's the issue with my system, that the drive has literally "dropped off the controller"? >> On 04/06/2014 12:44 AM, Vusa Moyo wrote: >>> This is more than likely a failed drive. >>> >>> Have you physically looked at the server for orange lights which may help ID the failed drive?? >>> >>> There could also be tools to query the lsi hba. >>> >>> Sent from my iPad >>> >>>> On Apr 6, 2014, at 1:20 AM, Kaya Saman wrote: >>>> >>>> Hi, >>>> >>>> I'm running FreeBSD 10.0 x64 on a Xeon E5 based system with 8GB RAM. >>>> >>>> >>>> Checking the ZPOOL status I saw one of my drives has been offlined... the exact error is this: >>>> >>>> # zpool status -v >>>> pool: ZPOOL_2 >>>> state: DEGRADED >>>> status: One or more devices has been removed by the administrator. >>>> Sufficient replicas exist for the pool to continue functioning in a >>>> degraded state. >>>> action: Online the device using 'zpool online' or replace the device with >>>> 'zpool replace'. 
>>>> scan: scrub repaired 0 in 9h3m with 0 errors on Sat Apr 5 03:46:55 2014 >>>> config: >>>> >>>> NAME STATE READ WRITE CKSUM >>>> ZPOOL_2 DEGRADED 0 0 0 >>>> raidz2-0 DEGRADED 0 0 0 >>>> da0 ONLINE 0 0 0 >>>> 14870388343127772554 REMOVED 0 0 0 was /dev/da1 >>>> da2 ONLINE 0 0 0 >>>> da3 ONLINE 0 0 0 >>>> da4 ONLINE 0 0 0 >>>> >>>> >>>> I think this is due to a dead disk however, I'm not certain which is why I wanted to ask here as I didn't remove the drive at all..... rather then some kind of OS/ZFS error. >>>> >>>> >>>> The drives are 2TB WD Green drives all connected to an LSI HBA; everything is still under warranty so no big issue there and I have external backups too so I'm not really that worried, I'm just trying to work out what's going on. >>>> >>>> >>>> Are my suspicions correct or should I simply try to reboot the system and see if the drive comes back online? From owner-freebsd-fs@FreeBSD.ORG Sun Apr 6 01:45:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4078F7F9 for ; Sun, 6 Apr 2014 01:45:45 +0000 (UTC) Received: from mail-wg0-x22e.google.com (mail-wg0-x22e.google.com [IPv6:2a00:1450:400c:c00::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C3A9761B for ; Sun, 6 Apr 2014 01:45:44 +0000 (UTC) Received: by mail-wg0-f46.google.com with SMTP id b13so5197073wgh.5 for ; Sat, 05 Apr 2014 18:45:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=MLGcXcFey19PtxItau85nDfGTZvUqtR9ALokTzHJ7kw=; b=KbEQNSu91PJZ+rsmP5H66oh/3RNJP3bipqa2p7IIzdUuLbUC03PNtrLrmk1+3iAJdp pdn/1gSmQUSePbpBqBknL0zKrnxf9dGQXQD6xoq0cnA+rYKJW3AQS/lClYRl35LhMfxa Gt50C96ZGETn6WtZbwQ1ErvQji+yPYuAZ2aWYHtg8S7G+/m93PWbtdm09nXinN/Kqsy1 A7wUVfg55GtB6pQs4qV0ckA2qLjdARbbF5uIDynf2T3v6yhpWEbCl1oyclzJXz5Aamjh 19+cf81gLfBvb1MfvO7w8VM+2HDJvV0nPb8ZD9oFRZxealVo+PoG/NaCkb3cEUVSui3b yIbg== X-Received: by 10.194.59.43 with SMTP id w11mr31675247wjq.65.1396748743094; Sat, 05 Apr 2014 18:45:43 -0700 (PDT) Received: from [192.168.20.30] (81-178-2-118.dsl.pipex.com. [81.178.2.118]) by mx.google.com with ESMTPSA id gz1sm14171133wib.14.2014.04.05.18.45.41 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 05 Apr 2014 18:45:42 -0700 (PDT) Message-ID: <5340B1C5.4000700@gmail.com> Date: Sun, 06 Apr 2014 02:45:41 +0100 From: Kaya Saman User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: kpneal@pobox.com Subject: Re: Device Removed by Administrator in ZPOOL? 
References: <53408FAB.8080202@gmail.com> <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> <53409BF1.6050001@gmail.com> <20140406002849.GA14765@neutralgood.org> In-Reply-To: <20140406002849.GA14765@neutralgood.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: FreeBSD Filesystems , Vusa Moyo X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Apr 2014 01:45:45 -0000 On 04/06/2014 01:28 AM, kpneal@pobox.com wrote: > On Sun, Apr 06, 2014 at 01:12:33AM +0100, Kaya Saman wrote: >> Many thanks for the response! >> >> The server doesn't show any lights for "drive error" however, the blue >> read LED isn't coming on, on the drive in question (as removed from ZPOOL). >> >> I will have a look for LSI tools in @Ports and also see if the BIOS LSI >> hook comes up with anything. > Have you seen any other errors in your logs? Seems like if a drive fails > there should be some other error message reporting the errors that resulted > in ZFS marking the drive removed. What does 'dmesg' have to say? > > Once ZFS has stopped using the drive (for whatever reason) I wouldn't > expect you to see anything else happening on the drive. So the light not > coming on doesn't really tell us anything new. > > Also, aren't 'green' drives the kind that spin down and then have to spin > back up when a request comes in? I don't know what happens if a drive takes > "too long" to respond because it has spun down. I have no idea how FreeBSD > handles that, and I also don't know if ZFS adds anything to the equation. > Hopefully someone else here will clue me/us in. > Ok this is really weird.... just did a reboot and now: $ zpool status pool: ZPOOL_2 state: ONLINE status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Sun Apr 6 02:43:03 2014 1.13G scanned out of 7.77T at 22.2M/s, 101h57m to go 227M resilvered, 0.01% done config: NAME STATE READ WRITE CKSUM ZPOOL_2 ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 da0 ONLINE 0 0 0 da1 ONLINE 0 0 0 (resilvering) da2 ONLINE 0 0 0 da3 ONLINE 0 0 0 da4 ONLINE 0 0 0 ???? Looks like the drive might have fallen off the controller? Am just looking at the tools for it on the LSI website but there doesn't seem to be anything FreeBSD related.... Linux and Solaris yes but no FBSD? Model is LSI SAS 9207-4i4e >> On 04/06/2014 12:44 AM, Vusa Moyo wrote: >>> This is more than likely a failed drive. >>> >>> Have you physically looked at the server for orange lights which may help ID the failed drive?? >>> >>> There could also be tools to query the lsi hba. >>> >>> Sent from my iPad >>> >>>> On Apr 6, 2014, at 1:20 AM, Kaya Saman wrote: >>>> >>>> Hi, >>>> >>>> I'm running FreeBSD 10.0 x64 on a Xeon E5 based system with 8GB RAM. >>>> >>>> >>>> Checking the ZPOOL status I saw one of my drives has been offlined... the exact error is this: >>>> >>>> # zpool status -v >>>> pool: ZPOOL_2 >>>> state: DEGRADED >>>> status: One or more devices has been removed by the administrator. >>>> Sufficient replicas exist for the pool to continue functioning in a >>>> degraded state. >>>> action: Online the device using 'zpool online' or replace the device with >>>> 'zpool replace'. 
>>>> scan: scrub repaired 0 in 9h3m with 0 errors on Sat Apr 5 03:46:55 2014 >>>> config: >>>> >>>> NAME STATE READ WRITE CKSUM >>>> ZPOOL_2 DEGRADED 0 0 0 >>>> raidz2-0 DEGRADED 0 0 0 >>>> da0 ONLINE 0 0 0 >>>> 14870388343127772554 REMOVED 0 0 0 was /dev/da1 >>>> da2 ONLINE 0 0 0 >>>> da3 ONLINE 0 0 0 >>>> da4 ONLINE 0 0 0 >>>> >>>> >>>> I think this is due to a dead disk however, I'm not certain which is why I wanted to ask here as I didn't remove the drive at all..... rather then some kind of OS/ZFS error. >>>> >>>> >>>> The drives are 2TB WD Green drives all connected to an LSI HBA; everything is still under warranty so no big issue there and I have external backups too so I'm not really that worried, I'm just trying to work out what's going on. >>>> >>>> >>>> Are my suspicions correct or should I simply try to reboot the system and see if the drive comes back online?
From owner-freebsd-fs@FreeBSD.ORG Sun Apr 6 05:21:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E9603390 for ; Sun, 6 Apr 2014 05:21:22 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id AFE7D784 for ; Sun, 6 Apr 2014 05:21:22 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s3656xoB080037 for ; Sun, 6 Apr 2014 00:06:59 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Sun Apr 6 00:06:59 2014 Message-ID: <5340E0EE.8010905@denninger.net> Date: Sun, 06 Apr 2014 00:06:54 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Device Removed by Administrator in ZPOOL? References: <53408FAB.8080202@gmail.com> <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> <53409BF1.6050001@gmail.com> <20140406002849.GA14765@neutralgood.org> <5340B1C5.4000700@gmail.com> In-Reply-To: <5340B1C5.4000700@gmail.com> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms060107070107090608010508" X-Antivirus: avast! (VPS 140405-4, 04/05/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Apr 2014 05:21:23 -0000
This is a cryptographically signed message in MIME format. --------------ms060107070107090608010508 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable
On 4/5/2014 8:45 PM, Kaya Saman wrote: > On 04/06/2014 01:28 AM, kpneal@pobox.com wrote: >> On Sun, Apr 06, 2014 at 01:12:33AM +0100, Kaya Saman wrote: >>> Many thanks for the response! >>> >>> The server doesn't show any lights for "drive error" however, the blue >>> read LED isn't coming on, on the drive in question (as removed from >>> ZPOOL). >>> >>> I will have a look for LSI tools in @Ports and also see if the BIOS LSI >>> hook comes up with anything.
>> Have you seen any other errors in your logs? Seems like if a drive fails >> there should be some other error message reporting the errors that >> resulted >> in ZFS marking the drive removed. What does 'dmesg' have to say? >> >> Once ZFS has stopped using the drive (for whatever reason) I wouldn't >> expect you to see anything else happening on the drive. So the light not >> coming on doesn't really tell us anything new. >> >> Also, aren't 'green' drives the kind that spin down and then have to >> spin >> back up when a request comes in? I don't know what happens if a drive >> takes >> "too long" to respond because it has spun down. I have no idea how >> FreeBSD >> handles that, and I also don't know if ZFS adds anything to the >> equation. >> Hopefully someone else here will clue me/us in. > > Ok this is really weird.... just did a reboot and now: > > $ zpool status > pool: ZPOOL_2 > state: ONLINE > status: One or more devices is currently being resilvered. The pool will > continue to function, possibly in a degraded state. > action: Wait for the resilver to complete. > scan: resilver in progress since Sun Apr 6 02:43:03 2014 > 1.13G scanned out of 7.77T at 22.2M/s, 101h57m to go > 227M resilvered, 0.01% done > config: > > NAME STATE READ WRITE CKSUM > ZPOOL_2 ONLINE 0 0 0 > raidz2-0 ONLINE 0 0 0 > da0 ONLINE 0 0 0 > da1 ONLINE 0 0 0 (resilvering) > da2 ONLINE 0 0 0 > da3 ONLINE 0 0 0 > da4 ONLINE 0 0 0 > > > ???? Looks like the drive might have fallen off the controller? > > Am just looking at the tools for it on the LSI website but there > doesn't seem to be anything FreeBSD related.... Linux and Solaris yes > but no FBSD? > > Model is LSI SAS 9207-4i4e >
It looks like the drive detached itself. I've seen those "Green" drives do this before; they go to "sleep" if quiescent, and sometimes fail to wake up properly. The controller then detaches them thinking they're dead, but they're not...

I'd get those things off your system. They work ok for desktop PCs but I don't like them in servers.
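If you want to pin down what the HBA and the kernel each think is attached before you start pulling hardware, a check along these lines works (a sketch, assuming the sysutils/smartmontools and sysutils/sas2ircu ports are installed and the controller is index 0):

# What the kernel currently enumerates; a detached drive is simply absent:
camcontrol devlist

# Ask the HBA itself; the 9207-4i4e is a SAS2 part, so sas2ircu applies:
sas2ircu LIST
sas2ircu 0 DISPLAY

# Once the drive reappears, look at its head-parking counters; Greens
# that park aggressively run Load_Cycle_Count up very quickly:
smartctl -A /dev/da1 | egrep 'Load_Cycle|Start_Stop'

If the disk is missing from all of those, the fault is in the drive, slot or cabling rather than in ZFS.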
-- 
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Sun Apr 6 11:20:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D332D14F for ; Sun, 6 Apr 2014 11:20:56 +0000 (UTC) Received: from mail-wi0-x236.google.com (mail-wi0-x236.google.com [IPv6:2a00:1450:400c:c05::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 67DFB33A for ; Sun, 6 Apr 2014 11:20:56 +0000 (UTC) Received: by mail-wi0-f182.google.com with SMTP id d1so3527686wiv.15 for ; Sun, 06 Apr 2014 04:20:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=8DGfzr8lIWLF0xhCmO7J1E7N1xaK6EqJNHrusRYp7Qs=; b=JKYEzLkx4eaLt5aLtfXhEp+FsHdbkzDYjuJdg2/lDxqPHAxtoPz4Az2UovaF1RR7Ms +VgQa/BlzokkbgAfhuYvK4R3xa4VSrkd4TxWoxsRvrXvRnAm+M7sB8rnCd8RcsNIVbUn 8gKHLj6dxdTN+TfiHP6BXjyq37spXDqAlZFq5nSwch9/93h4+6QGRT5AVnQOU8sdFgJ/ wSGG4HoCZ7wdnWTBwDHIpDRpGE5eaxmI6eS2//vd5+aHAHHRwdbT0/4K4uRasEdX3log e2Se+/clzK0i68beFiMlNVU/Vrv2WCn6NN8wR80d9neyOwzMa0F88E+QxyzIAA4ub54n T8og== X-Received: by 10.180.12.14 with SMTP id u14mr18489882wib.0.1396783254642; Sun, 06 Apr 2014 04:20:54 -0700 (PDT) Received: from [192.168.20.30] (81-178-2-118.dsl.pipex.com. [81.178.2.118]) by mx.google.com with ESMTPSA id ll1sm21279311wjc.6.2014.04.06.04.20.52 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sun, 06 Apr 2014 04:20:53 -0700 (PDT) Message-ID: <53413894.4050400@gmail.com> Date: Sun, 06 Apr 2014 12:20:52 +0100 From: Kaya Saman User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Device Removed by Administrator in ZPOOL? References: <53408FAB.8080202@gmail.com> <512A7865-CEFD-4BDA-A060-AE911BEDD5B7@tuxsystems.co.za> <53409BF1.6050001@gmail.com> <20140406002849.GA14765@neutralgood.org> <5340B1C5.4000700@gmail.com> <5340E0EE.8010905@denninger.net> In-Reply-To: <5340E0EE.8010905@denninger.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Apr 2014 11:20:56 -0000
On 04/06/2014 06:06 AM, Karl Denninger wrote: > > On 4/5/2014 8:45 PM, Kaya Saman wrote: >> On 04/06/2014 01:28 AM, kpneal@pobox.com wrote: >>> On Sun, Apr 06, 2014 at 01:12:33AM +0100, Kaya Saman wrote: >>>> Many thanks for the response! >>>> >>>> The server doesn't show any lights for "drive error" however, the blue >>>> read LED isn't coming on, on the drive in question (as removed from >>>> ZPOOL).
>>>> >>>> I will have a look for LSI tools in @Ports and also see if the BIOS >>>> LSI >>>> hook comes up with anything. >>> Have you seen any other errors in your logs? Seems like if a drive >>> fails >>> there should be some other error message reporting the errors that >>> resulted >>> in ZFS marking the drive removed. What does 'dmesg' have to say? >>> >>> Once ZFS has stopped using the drive (for whatever reason) I wouldn't >>> expect you to see anything else happening on the drive. So the light >>> not >>> coming on doesn't really tell us anything new. >>> >>> Also, aren't 'green' drives the kind that spin down and then have to >>> spin >>> back up when a request comes in? I don't know what happens if a >>> drive takes >>> "too long" to respond because it has spun down. I have no idea how >>> FreeBSD >>> handles that, and I also don't know if ZFS adds anything to the >>> equation. >>> Hopefully someone else here will clue me/us in. >> >> Ok this is really weird.... just did a reboot and now: >> >> $ zpool status >> pool: ZPOOL_2 >> state: ONLINE >> status: One or more devices is currently being resilvered. The pool >> will >> continue to function, possibly in a degraded state. >> action: Wait for the resilver to complete. >> scan: resilver in progress since Sun Apr 6 02:43:03 2014 >> 1.13G scanned out of 7.77T at 22.2M/s, 101h57m to go >> 227M resilvered, 0.01% done >> config: >> >> NAME STATE READ WRITE CKSUM >> ZPOOL_2 ONLINE 0 0 0 >> raidz2-0 ONLINE 0 0 0 >> da0 ONLINE 0 0 0 >> da1 ONLINE 0 0 0 (resilvering) >> da2 ONLINE 0 0 0 >> da3 ONLINE 0 0 0 >> da4 ONLINE 0 0 0 >> >> >> ???? Looks like the drive might have fallen off the controller? >> >> Am just looking at the tools for it on the LSI website but there >> doesn't seem to be anything FreeBSD related.... Linux and Solaris yes >> but no FBSD? >> >> Model is LSI SAS 9207-4i4e >> > It looks like the drive detached itself. I've seen those "Green" > drives do this before; they go to "sleep" if quiescent, and sometimes > fail to wake up properly. The controller then detaches them thinking > they're dead, but they're not... > > I'd get those things off your system. They work ok for desktop PCs > but I don't like them in servers. > Thanks for the info..... I didn't want to go for the "Green" variant either but they seemed to be highest in capacity for 2.5" drives which is why I really had no choice. I prefer WD Black drives as I've had really good experiences with them. 
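One thing worth keeping written down for next time, so the whole box doesn't need a reboot -- a sketch only, where da5 is a hypothetical node in case the disk re-enumerates somewhere new:

# If the device comes back under its old name, bring it online in place:
zpool online ZPOOL_2 da1

# If it comes back under a new name, replace the REMOVED vdev by the
# GUID that zpool status printed for it:
zpool replace ZPOOL_2 14870388343127772554 da5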
Guess I'll just have to wait till the capacity of the smaller drives increases :-( Regards, Kaya From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 08:40:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E4ADC4ED for ; Mon, 7 Apr 2014 08:40:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B5B5CB95 for ; Mon, 7 Apr 2014 08:40:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s378e1Gp024279 for ; Mon, 7 Apr 2014 08:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s378e1E6024278; Mon, 7 Apr 2014 08:40:01 GMT (envelope-from gnats) Date: Mon, 7 Apr 2014 08:40:01 GMT Message-Id: <201404070840.s378e1E6024278@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Aragon Gouveia Subject: Re: kern/185858: [zfs] zvol clone can't see new device X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Aragon Gouveia List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 08:40:02 -0000 The following reply was made to PR kern/185858; it has been noted by GNATS. From: Aragon Gouveia To: bug-followup@FreeBSD.org, biatche@gmail.com Cc: Subject: Re: kern/185858: [zfs] zvol clone can't see new device Date: Mon, 07 Apr 2014 10:28:11 +0200 FYI, this is a dupe of kern/178999. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=178999 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 10:25:48 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 80A2CDC0; Mon, 7 Apr 2014 10:25:48 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 53C81869; Mon, 7 Apr 2014 10:25:48 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s37APmIk058692; Mon, 7 Apr 2014 10:25:48 GMT (envelope-from smh@freefall.freebsd.org) Received: (from smh@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s37APmo4058691; Mon, 7 Apr 2014 10:25:48 GMT (envelope-from smh) Date: Mon, 7 Apr 2014 10:25:48 GMT Message-Id: <201404071025.s37APmo4058691@freefall.freebsd.org> To: smh@FreeBSD.org, freebsd-fs@FreeBSD.org, smh@FreeBSD.org From: smh@FreeBSD.org Subject: Re: kern/185858: [zfs] zvol clone can't see new device X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 10:25:48 -0000 Synopsis: [zfs] zvol clone can't see new device Responsible-Changed-From-To: freebsd-fs->smh Responsible-Changed-By: smh Responsible-Changed-When: Mon Apr 7 10:25:23 UTC 2014 Responsible-Changed-Why: I'll take it http://www.freebsd.org/cgi/query-pr.cgi?pr=185858 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 11:06:43 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1C023A05 for ; Mon, 7 Apr 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F1D8ABEC for ; Mon, 7 Apr 2014 11:06:42 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s37B6grS071031 for ; Mon, 7 Apr 2014 11:06:42 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s37B6gxp071029 for freebsd-fs@FreeBSD.org; Mon, 7 Apr 2014 11:06:42 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 7 Apr 2014 11:06:42 GMT Message-Id: <201404071106.s37B6gxp071029@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 11:06:43 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). 
The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/188187 fs [zfs] 10-stable: Kernel panic on zpool import: integer o kern/187905 fs [zpool] Confusion zpool with a block size in HDD - blo o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix o kern/187261 fs [fuse] FUSE kernel panic when using socket / bind o bin/187071 fs [nfs] nfs server only start 2 daemons 1 master & 1 ser o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186574 fs [zfs] zpool history hangs (infinite loop) o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. 
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167362 fs [fusefs] Reproduceble Page Fault when running rsync ov o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs 
[zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o 
kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 344 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 13:43:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 79C6FF1A for ; Mon, 7 Apr 2014 13:43:17 +0000 (UTC) Received: from potato.growveg.org (potato.growveg.org [62.49.247.163]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 36F81F79 for ; Mon, 7 Apr 2014 13:43:17 +0000 (UTC) Received: from john by potato.growveg.org with local (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WX9pK-00006V-Pc for freebsd-fs@freebsd.org; Mon, 07 Apr 2014 14:43:06 +0100 Date: Mon, 7 Apr 2014 14:43:06 +0100 From: John To: freebsd-fs@freebsd.org Subject: zfs - moving filesystem from one zpool to another Message-ID: <20140407134306.GA67619@potato.growveg.org> Mail-Followup-To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.23 (2014-03-12) Sender: John X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: john@potato.growveg.org X-SA-Exim-Scanned: No (on potato.growveg.org); SAEximRunCond expanded to false X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: freebsd-fs@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 13:43:17 -0000 Hello list, For other reasons I had to install pc-bsd 10 rather than the regular freebsd on my new desktop. Unfortunately it insisted on installing to the first device which is a ssd (110GB). It makes its own zpool called "tank" and everything gets installed there. Now, while it's great that some stuff is there on the ssd like / /boot and /root /var and /tmp, I'm rather less enamoured that /usr/ports /usr/src and /home are there too. With the hdds in the machine (ada1 ada2 and ada3) I made a raidz array that made 7.2TB usable space. This zpool is called "storage". I'd like to put the not-so-active filesystems on the zpool called "storage". A couple of issues though: 1. /usr/local is not it's own filesystem. 
Is it a case of just doing a recursive copy on the existing /usr/local into a newly created /storage/usr/local and setting the mountpoint as /usr/local (after deleting the old /usr/local) ? 2. /usr/ports IS it's own filesystem on tank: tank/usr/ports on /usr/ports (zfs, local, nfsv4acls) Can I zfs export this from tank and then zfs import it into storage ? If so, do I need to set a mountpoint? thanks, -- John From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 13:51:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D214434D for ; Mon, 7 Apr 2014 13:51:50 +0000 (UTC) Received: from mail-ob0-x22e.google.com (mail-ob0-x22e.google.com [IPv6:2607:f8b0:4003:c01::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9B1AAD6 for ; Mon, 7 Apr 2014 13:51:50 +0000 (UTC) Received: by mail-ob0-f174.google.com with SMTP id wo20so6597333obc.33 for ; Mon, 07 Apr 2014 06:51:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=lAs+vNgIAFzpil7aYlbsf3gA5PMBYGUnM6Jd1K1qiDg=; b=KeEDyLAY8BZ+/JtcrsEckFFqJ0MCNsTh4jRDJh0xOvn8wkB0bYpHl4UDMM683cB7Iw PjTbnZzzu+4giNT+lvhZg5qRkDiC6/g43TSwuoXkuZac2DSv/6lvJ3QLh5A28AgkBiJD QdEQkRjbjofO+C/v5x6nLDKcNAuB4RgcqdRL+/8lIIvVa3sqrlkPmKG08AAZV8NbN1ln gGnhpNOTJdNHmRInvBCySWxHXOF1aHJI3RGj2t8L/OUiG23jBiRxS+5GWSMK+qkzhwYy ogTx/X8jUOtIwYZU8PzpKp68SBRleK6UIjIXKQnXYI8w3AsK4dBvO3bSuiXW6U1bq/iB 4BQw== MIME-Version: 1.0 X-Received: by 10.60.161.101 with SMTP id xr5mr1195856oeb.71.1396878709637; Mon, 07 Apr 2014 06:51:49 -0700 (PDT) Received: by 10.76.12.34 with HTTP; Mon, 7 Apr 2014 06:51:49 -0700 (PDT) In-Reply-To: <20140407134306.GA67619@potato.growveg.org> References: <20140407134306.GA67619@potato.growveg.org> Date: Mon, 7 Apr 2014 15:51:49 +0200 Message-ID: Subject: Re: zfs - moving filesystem from one zpool to another From: Andreas Nilsson To: "freebsd-fs@freebsd.org" Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 13:51:51 -0000 On Mon, Apr 7, 2014 at 3:43 PM, John wrote: > Hello list, > > For other reasons I had to install pc-bsd 10 rather than the regular > freebsd on my new desktop. Unfortunately it insisted on installing to > the first device which is a ssd (110GB). It makes its own zpool called > "tank" and everything gets installed there. > > Now, while it's great that some stuff is there on the ssd like / /boot > and /root /var and /tmp, I'm rather less enamoured that /usr/ports /usr/src > and /home are there too. > > With the hdds in the machine (ada1 ada2 and ada3) I made a raidz array > that made 7.2TB usable space. This zpool is called "storage". I'd like > to put the not-so-active filesystems on the zpool called "storage". A > couple of issues though: > > 1. /usr/local is not it's own filesystem. 
Is it a case of just doing a
> recursive copy on the existing /usr/local into a newly created
> /storage/usr/local and setting the mountpoint as /usr/local (after
> deleting the old /usr/local) ?

Having /usr/local as a separate fs works. I would not use cp to move the files, as symlinks are not preserved.

> 2. /usr/ports IS it's own filesystem on tank:
> tank/usr/ports on /usr/ports (zfs, local, nfsv4acls)
>
> Can I zfs export this from tank and then zfs import it into storage ? If
> so, do I need to set a mountpoint?

export/import are commands that work on zpools, not on zfs datasets. You want send and receive:

zfs snapshot tank/usr/ports@now
zfs send tank/usr/ports@now | zfs recv storage/ports   # or whatever you want the name to be; the parent dataset must exist
zfs set mountpoint=/usr/ports storage/ports

Best regards
Andreas

> thanks,
> --
> John

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 13:53:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2B2073F3 for ; Mon, 7 Apr 2014 13:53:48 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id C9512ED for ; Mon, 7 Apr 2014 13:53:47 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s37DrVqC086474 for ; Mon, 7 Apr 2014 08:53:31 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Mon Apr 7 08:53:31 2014 Message-ID: <5342ADD6.9000502@denninger.net> Date: Mon, 07 Apr 2014 08:53:26 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: zfs - moving filesystem from one zpool to another References: <20140407134306.GA67619@potato.growveg.org> In-Reply-To: <20140407134306.GA67619@potato.growveg.org> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms060609030808020004040109" X-Antivirus: avast! (VPS 140407-0, 04/07/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 13:53:48 -0000

On 4/7/2014 8:43 AM, John wrote:
> Hello list,
>
> For other reasons I had to install pc-bsd 10 rather than the regular
> freebsd on my new desktop. Unfortunately it insisted on installing to
> the first device which is a ssd (110GB). It makes its own zpool called
> "tank" and everything gets installed there.
>
> Now, while it's great that some stuff is there on the ssd like / /boot
> and /root /var and /tmp, I'm rather less enamoured that /usr/ports /usr/src
> and /home are there too.
>
> With the hdds in the machine (ada1 ada2 and ada3) I made a raidz array
> that made 7.2TB usable space. This zpool is called "storage". I'd like
> to put the not-so-active filesystems on the zpool called "storage". A
> couple of issues though:
>
> 1. /usr/local is not it's own filesystem. Is it a case of just doing a
> recursive copy on the existing /usr/local into a newly created
> /storage/usr/local and setting the mountpoint as /usr/local (after
> deleting the old /usr/local) ?
>
> 2. /usr/ports IS it's own filesystem on tank:
> tank/usr/ports on /usr/ports (zfs, local, nfsv4acls)
>
> Can I zfs export this from tank and then zfs import it into storage ? If
> so, do I need to set a mountpoint?
>
> thanks,

Use "zfs send ..... | zfs recv" to do this; you zfs send the filesystem into a newly-created one on the other pool.

--
-- Karl
karl@denninger.net

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 14:44:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2E4D94A2 for ; Mon, 7 Apr 2014 14:44:11 +0000 (UTC) Received: from potato.growveg.org (potato.growveg.org [62.49.247.163]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id DC54C8D3 for ; Mon, 7 Apr 2014 14:44:10 +0000 (UTC) Received: from john by potato.growveg.org with local (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WXAmM-0000Bx-7V for freebsd-fs@freebsd.org; Mon, 07 Apr 2014 15:44:06 +0100 Date: Mon, 7 Apr 2014 15:44:06 +0100 From: John To: freebsd-fs@freebsd.org Subject: Re: zfs - moving filesystem from one zpool to another Message-ID: <20140407144406.GB67619@potato.growveg.org> Mail-Followup-To: freebsd-fs@freebsd.org References: <20140407134306.GA67619@potato.growveg.org> <5342ADD6.9000502@denninger.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <5342ADD6.9000502@denninger.net> User-Agent: Mutt/1.5.23 (2014-03-12) Sender: John X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: john@potato.growveg.org X-SA-Exim-Scanned: No (on potato.growveg.org); SAEximRunCond expanded to false X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: freebsd-fs@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 14:44:11 -0000

On Mon, Apr 07, 2014 at 08:53:26AM -0500, Karl Denninger wrote:
> Use "zfs send ..... | zfs recv" to do this; you zfs send the filesystem
> into a newly-created one on the other pool.
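Put together, the advice in this thread amounts to something like the sketch below. The dataset names under "storage", the snapshot name, and the use of a tar pipe (which, unlike a plain cp, preserves symlinks) are illustrative choices, not anything prescribed by the posters; the destroy/rm steps are only safe once the copies have been verified.

  # /usr/local is not yet a dataset: create one, copy, then swap mountpoints
  zfs create storage/local                       # mounts at /storage/local
  cd /usr/local && tar cf - . | tar xpf - -C /storage/local
  rm -rf /usr/local                              # after checking the copy!
  zfs set mountpoint=/usr/local storage/local

  # /usr/ports already is a dataset on tank: send it over
  zfs snapshot tank/usr/ports@move
  zfs send tank/usr/ports@move | zfs recv storage/ports
  zfs set mountpoint=/usr/ports storage/ports
  zfs destroy -r tank/usr/ports                  # after checking the copy!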
thank you both, I'll do this shortly -- John From owner-freebsd-fs@FreeBSD.ORG Mon Apr 7 14:59:35 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EDB32A99 for ; Mon, 7 Apr 2014 14:59:35 +0000 (UTC) Received: from potato.growveg.org (potato.growveg.org [62.49.247.163]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id ABAF8A16 for ; Mon, 7 Apr 2014 14:59:35 +0000 (UTC) Received: from john by potato.growveg.org with local (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WXB1H-0000Dc-Hq for freebsd-fs@freebsd.org; Mon, 07 Apr 2014 15:59:31 +0100 Date: Mon, 7 Apr 2014 15:59:31 +0100 From: John To: freebsd-fs@freebsd.org Subject: /boot/loader.conf or /etc/sysctl.conf ? zfs tunables Message-ID: <20140407145931.GC67619@potato.growveg.org> Mail-Followup-To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.23 (2014-03-12) Sender: John X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: john@potato.growveg.org X-SA-Exim-Scanned: No (on potato.growveg.org); SAEximRunCond expanded to false X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: freebsd-fs@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2014 14:59:36 -0000 Hello list, Is it safe for all zfs tunables to go in loader.conf? I see some are recommended to go in sysctl.conf. Well, not recommended, but quoted. Using /boot/loader.conf would, to me at least, be more systematic. Is there any harm in defining them all in there? If it is, it's not always obvious which goes where. 
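As far as I can tell, the vfs.zfs.* knobs are declared as loader tunables, so /boot/loader.conf (read at boot, before the kernel and its modules initialize) is the safe place for all of them; /etc/sysctl.conf is applied later by rc(8) and only works for the subset that is still writable at runtime. A quick way to tell the two apart (the arc_max value is only an illustration):

  # /boot/loader.conf -- applied at boot, works for any ZFS tunable
  vfs.zfs.arc_max="4G"

  # to test whether a knob is also runtime-writable, just try it:
  sysctl vfs.zfs.arc_max=4294967296
  # if sysctl reports the oid as read-only, the setting can only take
  # effect from /boot/loader.conf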
thanks -- John From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 10:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 54EE0330 for ; Thu, 10 Apr 2014 10:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3FAC01E6A for ; Thu, 10 Apr 2014 10:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3AAK0Cg027656 for ; Thu, 10 Apr 2014 10:20:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3AAK0Sv027655; Thu, 10 Apr 2014 10:20:00 GMT (envelope-from gnats) Date: Thu, 10 Apr 2014 10:20:00 GMT Message-Id: <201404101020.s3AAK0Sv027655@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Garrett Wollman Subject: Re: kern/181966: [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig); zio.c line 2955 [9.2/amd64] X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Garrett Wollman List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 10:20:01 -0000 The following reply was made to PR kern/181966; it has been noted by GNATS. From: Garrett Wollman To: bug-followup@freebsd.org Cc: Subject: Re: kern/181966: [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig); zio.c line 2955 [9.2/amd64] Date: Thu, 10 Apr 2014 06:14:12 -0400 Just hit this bug, with the following (apparently identical) stack trace on a 9.2 system: panic: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 2955 cpuid = 20 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff98a38d2900 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff98a38d29c0 panic() at panic+0x1ce/frame 0xffffff98a38d2ac0 assfail() at assfail+0x1a/frame 0xffffff98a38d2ad0 zio_done() at zio_done+0x120/frame 0xffffff98a38d2b30 zio_execute() at zio_execute+0xc3/frame 0xffffff98a38d2b70 taskqueue_run_locked() at taskqueue_run_locked+0x74/frame 0xffffff98a38d2bc0 taskqueue_thread_loop() at taskqueue_thread_loop+0x46/frame 0xffffff98a38d2be0 fork_exit() at fork_exit+0x11f/frame 0xffffff98a38d2c30 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff98a38d2c30 --- trap 0, rip = 0, rsp = 0xffffff98a38d2cf0, rbp = 0 --- No debugger or dump partition configured, so all I could do is let it reboot. 
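Getting a usable dump out of the next such panic only requires a dump device to be configured; a minimal sketch using the standard rc.conf knobs (the device name in the last line is a placeholder):

  # /etc/rc.conf
  dumpdev="AUTO"          # dump to the configured swap partition on panic
  dumpdir="/var/crash"    # where savecore(8) extracts the dump at boot

  # or enable it immediately, without a reboot:
  dumpon /dev/ada0p3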
-GAWollman From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 16:21:16 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 082341BB for ; Thu, 10 Apr 2014 16:21:16 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 3F7AB1620 for ; Thu, 10 Apr 2014 16:21:14 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA12616 for ; Thu, 10 Apr 2014 19:17:30 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WYHfO-0004tr-GC for freebsd-fs@FreeBSD.org; Thu, 10 Apr 2014 19:17:30 +0300 Message-ID: <5346C3E2.2080302@FreeBSD.org> Date: Thu, 10 Apr 2014 19:16:34 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: freebsd-fs Subject: freebsd vfs, solaris vfs, zfs X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=X-VIET-VPS Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 16:21:16 -0000 I've tried to express some of my understanding of how FreeBSD VFS works and how it compares to Solaris VFS model, maybe you would find that interesting: http://www.hybridcluster.com/blog/complexity-freebsd-vfs-using-zfs-example-part-2/ I will certainly appreciate any feedback. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 21:53:35 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 29D1151A for ; Thu, 10 Apr 2014 21:53:35 +0000 (UTC) Received: from mail-ve0-x22e.google.com (mail-ve0-x22e.google.com [IPv6:2607:f8b0:400c:c01::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E264618FC for ; Thu, 10 Apr 2014 21:53:34 +0000 (UTC) Received: by mail-ve0-f174.google.com with SMTP id oz11so4028954veb.33 for ; Thu, 10 Apr 2014 14:53:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=Esr8SCPtXdPnjNP97fP/k1m9RCf7pQRKaU9Cuqtcrno=; b=L5Q6GOsCAhYaP+AvZQBAdepLmEZA2WafOLTlozrvjWjvSaGsLrMQfFPOApcnaJlFrO yUCkO7cdJ43Lt2ng4MzzekhXqZqzPl33L5D1ABIOITAGEqMBz1b8H90ui/tQ6RBBEhqb SPC1Ew5FUBY7lfllalxA5Kebk+sCkBlxFtJ8XT2rcGoHogBD82KAVuC+VsGTDvAsguPf UN5plgsm5MPfRJMo7N4A7YRhpqRAIcDRYdD+sCV05/AgZQmRLh+TMaIJN4XphxBccw// +U3+pv8ams+Hy1kuQFSaf9vch6UjnLsd2JAmZgv/+ah+Iu+CrhFZBpkCQ3LNmJZV2hGQ BQ/g== MIME-Version: 1.0 X-Received: by 10.52.120.6 with SMTP id ky6mr1636715vdb.38.1397166813987; Thu, 10 Apr 2014 14:53:33 -0700 (PDT) Received: by 10.221.67.136 with HTTP; Thu, 10 Apr 2014 14:53:33 -0700 (PDT) Date: Thu, 10 Apr 2014 14:53:33 -0700 Message-ID: Subject: Immutable files on UFS? 
From: Garrett Cooper To: freebsd-fs@FreeBSD.org Content-Type: text/plain; charset=ISO-8859-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 21:53:35 -0000

Hi all,

This seems like a bit more than a basic question, but I apologize if I overlooked anything trivial. Basically I have some paths that don't seem to be removable. I'm not sure what needs to be done to make the paths mutable. I'm open to any and all suggestions in trying to clear out the filesystem :).

Thanks!
-Garrett

# uname -a
FreeBSD fbsd-vm.zonarsystems.net 11.0-CURRENT FreeBSD 11.0-CURRENT #2 5dc0f18(atf): Tue Apr 8 18:39:49 PDT 2014 root@fbsd-vm.zonarsystems.net:/usr/obj/usr/src/sys/GENERIC i386
# whoami
root
# mount | grep /dev/ada0p2
/dev/ada0p2 on / (ufs, local, journaled soft-updates)
# chflags -R 0 /usr/obj~/
# rm -Rf /usr/obj~/
rm: /usr/obj~/usr/src/tmp/usr/lib/engines: Directory not empty
rm: /usr/obj~/usr/src/tmp/usr/lib: Directory not empty
rm: /usr/obj~/usr/src/tmp/usr: Directory not empty
rm: /usr/obj~/usr/src/tmp: Directory not empty
rm: /usr/obj~/usr/src: Directory not empty
rm: /usr/obj~/usr: Directory not empty
rm: /usr/obj~/obj/usr/src/tmp/usr/tests/bin/pkill: Directory not empty
rm: /usr/obj~/obj/usr/src/tmp/usr/tests/bin: Directory not empty
rm: /usr/obj~/obj/usr/src/tmp/usr/tests: Directory not empty
rm: /usr/obj~/obj/usr/src/tmp/usr: Directory not empty
rm: /usr/obj~/obj/usr/src/tmp: Directory not empty
rm: /usr/obj~/obj/usr/src: Directory not empty
rm: /usr/obj~/obj/usr: Directory not empty
rm: /usr/obj~/obj: Directory not empty
rm: /usr/obj~/: Directory not empty
# truss -o log rm -Rf pjdfstest_0144459825150ee4088883866736d6ed/pjdfstest_5927dfc58a61fcfd500945dc28976d89
rm: pjdfstest_0144459825150ee4088883866736d6ed/pjdfstest_5927dfc58a61fcfd500945dc28976d89: Operation not permitted
# egrep 'rmdir|unlink' log
unlink("pjdfstest_0144459825150ee4088883866736d6ed/pjdfstest_5927dfc58a61fcfd500945dc28976d89") ERR#1 'Operation not permitted'

# truss -o log rm -Rf /usr/obj~/usr/src/tmp/usr/lib/engines/
rm: /usr/obj~/usr/src/tmp/usr/lib/engines/: Directory not empty
# find /usr/obj~/usr/src/tmp/usr/lib/engines/ -mindepth 1 | wc -l
0
# egrep 'rmdir|unlink' log
rmdir(0x2880b100,0x10,0x0,0xbfbfec10,0x0,0x2) ERR#66 'Directory not empty'

From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 22:04:22 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8E5BAB63 for ; Thu, 10 Apr 2014 22:04:22 +0000 (UTC) Received: from mail.in-addr.com (noop.in-addr.com [208.58.23.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5EC7A19E4 for ; Thu, 10 Apr 2014 22:04:22 +0000 (UTC) Received: from gjp by mail.in-addr.com with local (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1WYN4t-0005tN-KR; Thu, 10 Apr 2014 18:04:11 -0400 Date: Thu, 10 Apr 2014 18:04:11 -0400 From: Gary Palmer To: Garrett Cooper Subject: Re: Immutable files on UFS?
Message-ID: <20140410220411.GB15884@in-addr.com> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: gpalmer@freebsd.org X-SA-Exim-Scanned: No (on mail.in-addr.com); SAEximRunCond expanded to false Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 22:04:22 -0000

On Thu, Apr 10, 2014 at 02:53:33PM -0700, Garrett Cooper wrote:
> Hi all,
> This seems like a bit more than a basic question, but I apologize
> if I overlooked anything trivial. Basically I have some paths that
> don't seem to be removable. I'm not sure what needs to be done to make
> the paths mutable.
> I'm open to any and all suggestions in trying to clear out the
> filesystem :).
> Thanks!
> -Garrett
>
> # uname -a
> FreeBSD fbsd-vm.zonarsystems.net 11.0-CURRENT FreeBSD 11.0-CURRENT #2
> 5dc0f18(atf): Tue Apr 8 18:39:49 PDT 2014
> root@fbsd-vm.zonarsystems.net:/usr/obj/usr/src/sys/GENERIC i386
> # whoami
> root
> # mount | grep /dev/ada0p2
> /dev/ada0p2 on / (ufs, local, journaled soft-updates)
> # chflags -R 0 /usr/obj~/
> # rm -Rf /usr/obj~/
> rm: /usr/obj~/usr/src/tmp/usr/lib/engines: Directory not empty
> rm: /usr/obj~/usr/src/tmp/usr/lib: Directory not empty
> rm: /usr/obj~/usr/src/tmp/usr: Directory not empty
> rm: /usr/obj~/usr/src/tmp: Directory not empty
> rm: /usr/obj~/usr/src: Directory not empty
> rm: /usr/obj~/usr: Directory not empty
> rm: /usr/obj~/obj/usr/src/tmp/usr/tests/bin/pkill: Directory not empty
> rm: /usr/obj~/obj/usr/src/tmp/usr/tests/bin: Directory not empty
> rm: /usr/obj~/obj/usr/src/tmp/usr/tests: Directory not empty
> rm: /usr/obj~/obj/usr/src/tmp/usr: Directory not empty
> rm: /usr/obj~/obj/usr/src/tmp: Directory not empty
> rm: /usr/obj~/obj/usr/src: Directory not empty
> rm: /usr/obj~/obj/usr: Directory not empty
> rm: /usr/obj~/obj: Directory not empty
> rm: /usr/obj~/: Directory not empty
> # truss -o log rm -Rf
> pjdfstest_0144459825150ee4088883866736d6ed/pjdfstest_5927dfc58a61fcfd500945dc28976d89
> rm: pjdfstest_0144459825150ee4088883866736d6ed/pjdfstest_5927dfc58a61fcfd500945dc28976d89:
> Operation not permitted
> # egrep 'rmdir|unlink' log
> unlink("pjdfstest_0144459825150ee4088883866736d6ed/pjdfstest_5927dfc58a61fcfd500945dc28976d89")
> ERR#1 'Operation not permitted'
>
> # truss -o log rm -Rf /usr/obj~/usr/src/tmp/usr/lib/engines/
> rm: /usr/obj~/usr/src/tmp/usr/lib/engines/: Directory not empty
> # find /usr/obj~/usr/src/tmp/usr/lib/engines/ -mindepth 1 | wc -l
> 0
> # egrep 'rmdir|unlink' log
> rmdir(0x2880b100,0x10,0x0,0xbfbfec10,0x0,0x2) ERR#66 'Directory not empty'

Try looking at files in that directory with the '-o' flag to ls. e.g.

ls -lago /usr/obj~/usr/src/tmp/usr/lib/engines/

If you see files with 'schg' on them, then run

chflags noschg

You could also do

chflags -PR noschg /usr/obj~/

although be careful, as some files on the filesystem (such as /lib/libc.so.*) are meant to be immutable. If you are running with a securelevel above 0 the above won't be possible.
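The flag behaviour is easy to see on a scratch file; a quick illustration (the path is arbitrary and the ls output is abbreviated):

  # touch /tmp/flagdemo
  # chflags schg /tmp/flagdemo
  # rm -f /tmp/flagdemo
  rm: /tmp/flagdemo: Operation not permitted
  # ls -lo /tmp/flagdemo
  -rw-r--r--  1 root  wheel  schg 0 Apr 10 18:10 /tmp/flagdemo
  # chflags noschg /tmp/flagdemo && rm /tmp/flagdemo   # works once the flag is cleared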
Regards,
Gary

From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 22:07:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5D1FED5D for ; Thu, 10 Apr 2014 22:07:15 +0000 (UTC) Received: from mail-ee0-x235.google.com (mail-ee0-x235.google.com [IPv6:2a00:1450:4013:c00::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E72AF1A1A for ; Thu, 10 Apr 2014 22:07:14 +0000 (UTC) Received: by mail-ee0-f53.google.com with SMTP id b57so3464196eek.40 for ; Thu, 10 Apr 2014 15:07:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=r1OOKLq00QL61Ei6Ni09muYR6Ml2jStkItXY3CpYF20=; b=UutZptvj74dnT/1KUmx5Rfx7FLIy3F8Y9ecVhM9ZFzzxaGX16aBHqFHdGFug9YRE4F O1JtHa9mxWlKZJ//XGnnEMRZHzfUQzDkzkgoh3tuvsGCAiQvHxhMm9wHI4EzlIjuvTy5 FEzKGxe5GfQ+NlunVOS+V8Do85pDgTx2BM3xNsedXUxwrZSik7E68JT5nUSghw+79SKe hGsGavwEvS1RAk7pDa4gXnTHr0iL7YyDK7iMxxx2RwinV/vESw+luKGn0N4j3KRxwx3w 1vBKFPrZVgsejJOXwP6WKsHlrLGiHXnuQZCvn2N2W49JVrjYQVdZCCWgLY3sVWU+VCSL 8WnQ== X-Received: by 10.15.64.75 with SMTP id n51mr23960821eex.33.1397167633308; Thu, 10 Apr 2014 15:07:13 -0700 (PDT) Received: from [192.168.1.10] (actn47.neoplus.adsl.tpnet.pl. [83.11.67.47]) by mx.google.com with ESMTPSA id u1sm12907279eex.31.2014.04.10.15.07.12 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 10 Apr 2014 15:07:12 -0700 (PDT) Sender: =?UTF-8?Q?Edward_Tomasz_Napiera=C5=82a?= Subject: Re: Immutable files on UFS? Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=iso-8859-2 From: =?iso-8859-2?Q?Edward_Tomasz_Napiera=B3a?= In-Reply-To: Date: Fri, 11 Apr 2014 00:07:11 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <3943BCE8-66C2-4A22-8997-167564A3AD0E@FreeBSD.org> References: To: Garrett Cooper X-Mailer: Apple Mail (2.1283) Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 22:07:15 -0000

Message written by Garrett Cooper on 10 Apr 2014, at 23:53:
> Hi all,
> This seems like a bit more than a basic question, but I apologize
> if I overlooked anything trivial. Basically I have some paths that
> don't seem to be removable. I'm not sure what needs to be done to make
> the paths mutable.
> I'm open to any and all suggestions in trying to clear out the
> filesystem :).

Full fsck, without using journal?
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 22:07:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 286EFDDE; Thu, 10 Apr 2014 22:07:59 +0000 (UTC) Received: from mail-vc0-x22c.google.com (mail-vc0-x22c.google.com [IPv6:2607:f8b0:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CAAEB1A23; Thu, 10 Apr 2014 22:07:58 +0000 (UTC) Received: by mail-vc0-f172.google.com with SMTP id la4so4100059vcb.3 for ; Thu, 10 Apr 2014 15:07:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=69diIvjJQoPGp0qDkVjTUsEFKAB3gbXwOVi5L+UwSZE=; b=rJPqZhZ1YPz1Ibup599G0lD6nFJqBq+QjduPnjC/DasyZH7+DXQmNv1QPsc5rubnhG NwifKSNRU7b/iNQ4eyT2QspOBYYa/jDoPwrFFOHSK5vDcnMgBMR2tXDnfYrgoi4L9H7G ETAF3KA9L83z18ScdqnTAeVpxFom/PXq+ZLTAirA4F1MuVtGDpBItv7ia4zTRKCHRScG c8jTZWx66hlNbt5X0i2PpZYUef9luy7cRMhPoEmNilIRVIUg/BueAY+VzW89K6fANVaq +X7vIL/s8ZXry/HpeAjHNbdQwCuQaj5JYRdLH8VvAMVP+3byAChCU37JR1UqhN3Mb7Qm uxTQ== MIME-Version: 1.0 X-Received: by 10.58.207.74 with SMTP id lu10mr16213496vec.15.1397167677925; Thu, 10 Apr 2014 15:07:57 -0700 (PDT) Received: by 10.221.67.136 with HTTP; Thu, 10 Apr 2014 15:07:57 -0700 (PDT) In-Reply-To: <3943BCE8-66C2-4A22-8997-167564A3AD0E@FreeBSD.org> References: <3943BCE8-66C2-4A22-8997-167564A3AD0E@FreeBSD.org> Date: Thu, 10 Apr 2014 15:07:57 -0700 Message-ID: Subject: Re: Immutable files on UFS? From: Garrett Cooper To: =?ISO-8859-2?Q?Edward_Tomasz_Napiera=B3a?= Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 22:07:59 -0000

On Thu, Apr 10, 2014 at 3:07 PM, Edward Tomasz Napierała wrote:
> Message written by Garrett Cooper on 10 Apr 2014, at 23:53:
>> Hi all,
>> This seems like a bit more than a basic question, but I apologize
>> if I overlooked anything trivial. Basically I have some paths that
>> don't seem to be removable. I'm not sure what needs to be done to make
>> the paths mutable.
>> I'm open to any and all suggestions in trying to clear out the
>> filesystem :).
>
> Full fsck, without using journal?

Good point -- I haven't tried that yet :). Thanks!
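For the archive: forcing the full pass instead of a journal replay on an SU+J filesystem is fsck's -f flag, run while the filesystem is quiescent. For the root filesystem from the earlier mount output that would look roughly like:

  # shutdown now              # drop to single-user mode first
  # fsck_ffs -f /dev/ada0p2   # -f does a full check rather than a journal replay
  # reboot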
-Garrett From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 22:12:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6C4B828A; Thu, 10 Apr 2014 22:12:46 +0000 (UTC) Received: from mail-ve0-x231.google.com (mail-ve0-x231.google.com [IPv6:2607:f8b0:400c:c01::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1AC241AC3; Thu, 10 Apr 2014 22:12:46 +0000 (UTC) Received: by mail-ve0-f177.google.com with SMTP id sa20so3935028veb.22 for ; Thu, 10 Apr 2014 15:12:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=3VnLnER/P136Cg5ZhjWI5TfQivvr6pbClQEfAx0P0F0=; b=vrB66TegUTArdyVlcw6ogETGOaJ/MHV2Nz4R8QREQiiYOc67F/lN0OSnUyHWihB5OD TqublNzdfu6iv3bwRHVXLhfPt+tu2nwy5Qh4638/E4u7nhr4fdqPiZ+xbYA+tsCE4zeZ reBErpwIf7gT7BTYwwcReNIkUXWOt94aCadF9nyS2dM1IQ6hMR26hl658X2oCSa2hxva c8IdP9ggTSYe8JHYLF7bSwpdN17eq3iEFKIkhOKkX/Vjg0R5U/X6IdzZl6lK71KvDbng IgEj4hCXkh6ayeMQ2b2g4ywhFIvATkhe7aIj1hpQ6rzJityLQ5kEJMg6+u5hyMaVV5zz bMng== MIME-Version: 1.0 X-Received: by 10.220.92.135 with SMTP id r7mr16435287vcm.11.1397167965179; Thu, 10 Apr 2014 15:12:45 -0700 (PDT) Received: by 10.221.67.136 with HTTP; Thu, 10 Apr 2014 15:12:45 -0700 (PDT) In-Reply-To: <20140410220411.GB15884@in-addr.com> References: <20140410220411.GB15884@in-addr.com> Date: Thu, 10 Apr 2014 15:12:45 -0700 Message-ID: Subject: Re: Immutable files on UFS? From: Garrett Cooper To: Gary Palmer Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 22:12:46 -0000 On Thu, Apr 10, 2014 at 3:04 PM, Gary Palmer wrote: ... > Try looking at files in that directory with the '-o' flag to ls. e.g. > > ls -lago /usr/obj~/usr/src/tmp/usr/lib/engines/ Thanks for the extra ls -l command (so many flags, so little time!). > If you see files with 'schg' on them, then run Unfortunately there aren't any. chflags -R 0 clears all of the chflags on files. I always run that instead of running noschg nowadays on directories like /usr/obj*, like buildworld does. > chflags noschg > > You could also do > > chflags -PR noschg /usr/obj~/ Hmm... didn't try it without -P. According to the manpage it should be the default, but it wasn't when I ran it. That solved my issue with the pjdfstest file, but not /usr/obj~. > although be careful, as some files on the filesystem (such as /lib/libc.so.*) > are meant to be immutable. Indeed :). > If you are running with a securelevel above 0 the above won't be possible. My kern.securelevel's unset :). Thanks! 
-Garrett

From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 22:22:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 47F9C429; Thu, 10 Apr 2014 22:22:38 +0000 (UTC) Received: from mail-vc0-x231.google.com (mail-vc0-x231.google.com [IPv6:2607:f8b0:400c:c03::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E9CA21BC5; Thu, 10 Apr 2014 22:22:37 +0000 (UTC) Received: by mail-vc0-f177.google.com with SMTP id if17so3942360vcb.36 for ; Thu, 10 Apr 2014 15:22:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=NHrF7Tnr+Tj9jD9dN5/5LaBYzPheiJlxmNoG6buRsxA=; b=tfPcMAM1nVoJG/iOkDwDJjWAFwZ+wlV3EzhxxlASS7kU3HkKrdkS6k7eZG8OXpL78U LcARyoGE9MOwgtC24xb/Ub1LLaELf49O6XVp26vFq/0QWP+R8fuOaxOfCsnLUVYAeGOJ jlPaBbl8qLTFBmjXWA6612XtHT01uBbYtcmooe24NYjii2SA522BW/uN6IXmFujLoKyy +9Kwg/AkAbTevZwhsblcFEJy9uueaJP8R0WEVyjHgl3XeyzxE/JMkzYU9CCOIUetCg4j nY/lVov/vAp2/RDm0BxKIKG3VkoqQ7vxqJ0+3BW0YzVkYglIvTL+IucI75haIRbai4tJ i/vw== MIME-Version: 1.0 X-Received: by 10.58.219.233 with SMTP id pr9mr16955107vec.10.1397168557074; Thu, 10 Apr 2014 15:22:37 -0700 (PDT) Received: by 10.221.67.136 with HTTP; Thu, 10 Apr 2014 15:22:37 -0700 (PDT) In-Reply-To: References: <3943BCE8-66C2-4A22-8997-167564A3AD0E@FreeBSD.org> Date: Thu, 10 Apr 2014 15:22:37 -0700 Message-ID: Subject: Re: Immutable files on UFS? From: Garrett Cooper To: =?ISO-8859-2?Q?Edward_Tomasz_Napiera=B3a?= Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 22:22:38 -0000

On Thu, Apr 10, 2014 at 3:07 PM, Garrett Cooper wrote:
> On Thu, Apr 10, 2014 at 3:07 PM, Edward Tomasz Napierała wrote:
>> Message written by Garrett Cooper on 10 Apr 2014, at 23:53:
>>> Hi all,
>>> This seems like a bit more than a basic question, but I apologize
>>> if I overlooked anything trivial. Basically I have some paths that
>>> don't seem to be removable. I'm not sure what needs to be done to make
>>> the paths mutable.
>>> I'm open to any and all suggestions in trying to clear out the
>>> filesystem :).
>>
>> Full fsck, without using journal?
>
> Good point -- I haven't tried that yet :).

And voila -- a full fsck worked. Thank you :)!
-Garrett From owner-freebsd-fs@FreeBSD.ORG Thu Apr 10 22:26:40 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3F0114CF; Thu, 10 Apr 2014 22:26:40 +0000 (UTC) Received: from mail-ve0-x22e.google.com (mail-ve0-x22e.google.com [IPv6:2607:f8b0:400c:c01::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E1B1E1BDD; Thu, 10 Apr 2014 22:26:39 +0000 (UTC) Received: by mail-ve0-f174.google.com with SMTP id oz11so3970614veb.5 for ; Thu, 10 Apr 2014 15:26:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=yZMfyT///8f7x5qko5OIm2WrYEkvI5W7/v+ArWGGk3g=; b=t8xgokYJbMTqu+hA8Wmjaj7D23BDhxE8Elxr5uNDw5Qyq/ahXtwllHCHdhOdp9mTSi 0OktwE8Pz30NNTG5LpGIpdiErfLxa7o0gCFr1iM/y4H26NFXK1bSFWPm9MNssWz1LRhF lYPiLMEa/46coCN5KJvViUwlWqsEMgk2XduzsUNNWmuEgsRQzcOOaJCE/IbrIhHm4o7Z fo4XU7JwZ7hUaExJzMpWCLnA9TInGPw/PdqP/Sg3rdj1mkgyShQ7+uEIQpRYJQW981XW S7151buuK+07uldgkBvU3iOuP46PTItxLc3An0kqb5vSjrxMpIIC738ZP7SnB3kGt8LB R05w== MIME-Version: 1.0 X-Received: by 10.52.12.36 with SMTP id v4mr14121141vdb.20.1397168799079; Thu, 10 Apr 2014 15:26:39 -0700 (PDT) Received: by 10.221.67.136 with HTTP; Thu, 10 Apr 2014 15:26:39 -0700 (PDT) In-Reply-To: References: <20140410220411.GB15884@in-addr.com> Date: Thu, 10 Apr 2014 15:26:39 -0700 Message-ID: Subject: Re: Immutable files on UFS? From: Garrett Cooper To: Gary Palmer Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2014 22:26:40 -0000 On Thu, Apr 10, 2014 at 3:12 PM, Garrett Cooper wrote: ... > Hmm... didn't try it without -P. According to the manpage it should be > the default, but it wasn't when I ran it. That solved my issue with > the pjdfstest file, but not /usr/obj~. Please ignore this note. I had another memory disk mounted on top of that (which means the file's still there and I'm trying to figure out how to get rid of it). Thanks! 
-Garrett From owner-freebsd-fs@FreeBSD.ORG Fri Apr 11 00:57:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AAA77B51 for ; Fri, 11 Apr 2014 00:57:27 +0000 (UTC) Received: from hergotha.csail.mit.edu (wollman-1-pt.tunnel.tserv4.nyc4.ipv6.he.net [IPv6:2001:470:1f06:ccb::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4DFCB1993 for ; Fri, 11 Apr 2014 00:57:27 +0000 (UTC) Received: from hergotha.csail.mit.edu (localhost [127.0.0.1]) by hergotha.csail.mit.edu (8.14.7/8.14.7) with ESMTP id s3B0vOOJ091618 for ; Thu, 10 Apr 2014 20:57:24 -0400 (EDT) (envelope-from wollman@hergotha.csail.mit.edu) Received: (from wollman@localhost) by hergotha.csail.mit.edu (8.14.7/8.14.4/Submit) id s3B0vOcS091615; Thu, 10 Apr 2014 20:57:24 -0400 (EDT) (envelope-from wollman) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <21319.15860.827329.334067@hergotha.csail.mit.edu> Date: Thu, 10 Apr 2014 20:57:24 -0400 From: Garrett Wollman To: freebsd-fs@freebsd.org Subject: ZFS panic: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig) X-Mailer: VM 7.17 under 21.4 (patch 22) "Instant Classic" XEmacs Lucid X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (hergotha.csail.mit.edu [127.0.0.1]); Thu, 10 Apr 2014 20:57:25 -0400 (EDT) X-Spam-Status: No, score=-0.8 required=5.0 tests=ALL_TRUSTED, HEADER_FROM_DIFFERENT_DOMAINS autolearn=disabled version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on hergotha.csail.mit.edu X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Apr 2014 00:57:27 -0000 I have a file server that has panic()ed twice today with the following: panic: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 2955 cpuid = 20 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff98a38d2900 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff98a38d29c0 panic() at panic+0x1ce/frame 0xffffff98a38d2ac0 assfail() at assfail+0x1a/frame 0xffffff98a38d2ad0 zio_done() at zio_done+0x120/frame 0xffffff98a38d2b30 zio_execute() at zio_execute+0xc3/frame 0xffffff98a38d2b70 taskqueue_run_locked() at taskqueue_run_locked+0x74/frame 0xffffff98a38d2bc0 taskqueue_thread_loop() at taskqueue_thread_loop+0x46/frame 0xffffff98a38d2be0 fork_exit() at fork_exit+0x11f/frame 0xffffff98a38d2c30 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff98a38d2c30 --- trap 0, rip = 0, rsp = 0xffffff98a38d2cf0, rbp = 0 --- Uptime: 11d15h27m51s Automatic reboot in 15 seconds - press a key on the console to abort --> Press a key on the console to reboot, --> or switch off the system now. (It also failed to reboot automatically, as if a character had been sent to the console.) Anyone have a guess about what's going on here? I obviously can't keep manually restarting a production file server every 14 hours. 
Here's the code in question:

        if (bp != NULL) {
                ASSERT(bp->blk_pad[0] == 0);
                ASSERT(bp->blk_pad[1] == 0);
                ASSERT(bcmp(bp, &zio->io_bp_copy, sizeof (blkptr_t)) == 0 ||
                    (bp == zio_unique_parent(zio)->io_bp));
                if (zio->io_type == ZIO_TYPE_WRITE && !BP_IS_HOLE(bp) &&
                    zio->io_bp_override == NULL &&
                    !(zio->io_flags & ZIO_FLAG_IO_REPAIR)) {
                        ASSERT(!BP_SHOULD_BYTESWAP(bp));
                        ASSERT3U(zio->io_prop.zp_copies, <=, BP_GET_NDVAS(bp));
                        ASSERT(BP_COUNT_GANG(bp) == 0 ||
                            (BP_COUNT_GANG(bp) == BP_GET_NDVAS(bp)));
                }
                if (zio->io_flags & ZIO_FLAG_NOPWRITE)
                        VERIFY(BP_EQUAL(bp, &zio->io_bp_orig));
        }

And it just now panicked again. Help!

-GAWollman

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 11 01:06:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 856BF707 for ; Fri, 11 Apr 2014 01:06:12 +0000 (UTC) Received: from anubis.delphij.net (anubis.delphij.net [IPv6:2001:470:1:117::25]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "anubis.delphij.net", Issuer "StartCom Class 1 Primary Intermediate Server CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 573BE1A98 for ; Fri, 11 Apr 2014 01:06:12 +0000 (UTC) Received: from zeta.ixsystems.com (unknown [69.198.165.132]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by anubis.delphij.net (Postfix) with ESMTPSA id 09A8046CB; Thu, 10 Apr 2014 18:06:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphij.net; s=anubis; t=1397178371; bh=hzJygbCZYFkF2yACvLmoEnZTr8afiLeJT2UxlcTcmeY=; h=Date:From:Reply-To:To:Subject:References:In-Reply-To; b=rvMA1GyL8w18MrxahmzAML9GXAbZ0Q9P8jaCsSIXx0L26NpNEzisCTzyERIlOAdr0 yy3f3PO6JyDcvmSLLT4BHEdq89ZogPexQf43VugRn/efkNViUm0y5GUmPCzB9c+d/b 7IxQadiNAScSM2TcV48ksJTnapTjJXXDAag1PZqA= Message-ID: <53474002.5020003@delphij.net> Date: Thu, 10 Apr 2014 18:06:10 -0700 From: Xin Li Organization: The FreeBSD Project MIME-Version: 1.0 To: Garrett Wollman , freebsd-fs@freebsd.org Subject: Re: ZFS panic: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig) References: <21319.15860.827329.334067@hergotha.csail.mit.edu> In-Reply-To: <21319.15860.827329.334067@hergotha.csail.mit.edu> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: d@delphij.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Apr 2014 01:06:12 -0000

On 04/10/14 17:57, Garrett Wollman wrote:
> I have a file server that has panic()ed twice today with the
> following:
>
> panic: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig), file:
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c,
> line: 2955
> cpuid = 20
> KDB: stack backtrace:
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff98a38d2900
> kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff98a38d29c0
> panic() at panic+0x1ce/frame 0xffffff98a38d2ac0
> assfail() at assfail+0x1a/frame 0xffffff98a38d2ad0
> zio_done() at zio_done+0x120/frame 0xffffff98a38d2b30
> zio_execute() at zio_execute+0xc3/frame 0xffffff98a38d2b70
> taskqueue_run_locked() at taskqueue_run_locked+0x74/frame 0xffffff98a38d2bc0
> taskqueue_thread_loop()
at taskqueue_thread_loop+0x46/frame 0xffffff98a38d2be0
> fork_exit() at fork_exit+0x11f/frame 0xffffff98a38d2c30
> fork_trampoline() at fork_trampoline+0xe/frame 0xffffff98a38d2c30
> --- trap 0, rip = 0, rsp = 0xffffff98a38d2cf0, rbp = 0 ---
> Uptime: 11d15h27m51s
> Automatic reboot in 15 seconds - press a key on the console to abort
> --> Press a key on the console to reboot,
> --> or switch off the system now.
>
> (It also failed to reboot automatically, as if a character had
> been sent to the console.)
>
> Anyone have a guess about what's going on here? I obviously can't
> keep manually restarting a production file server every 14 hours.
> Here's the code in question:
>
>         if (bp != NULL) {
>                 ASSERT(bp->blk_pad[0] == 0);
>                 ASSERT(bp->blk_pad[1] == 0);
>                 ASSERT(bcmp(bp, &zio->io_bp_copy, sizeof (blkptr_t)) == 0 ||
>                     (bp == zio_unique_parent(zio)->io_bp));
>                 if (zio->io_type == ZIO_TYPE_WRITE && !BP_IS_HOLE(bp) &&
>                     zio->io_bp_override == NULL &&
>                     !(zio->io_flags & ZIO_FLAG_IO_REPAIR)) {
>                         ASSERT(!BP_SHOULD_BYTESWAP(bp));
>                         ASSERT3U(zio->io_prop.zp_copies, <=, BP_GET_NDVAS(bp));
>                         ASSERT(BP_COUNT_GANG(bp) == 0 ||
>                             (BP_COUNT_GANG(bp) == BP_GET_NDVAS(bp)));
>                 }
>                 if (zio->io_flags & ZIO_FLAG_NOPWRITE)
>                         VERIFY(BP_EQUAL(bp, &zio->io_bp_orig));
>         }
>
> And it just now panicked again. Help!

Have you tried setting vfs.zfs.nopwrite_enabled=0 in /boot/loader.conf?

Cheers,
--
Xin LI https://www.delphij.net/
FreeBSD - The Power to Serve! Live free or die

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 11 01:53:40 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F3DB85EF for ; Fri, 11 Apr 2014 01:53:39 +0000 (UTC) Received: from hergotha.csail.mit.edu (wollman-1-pt.tunnel.tserv4.nyc4.ipv6.he.net [IPv6:2001:470:1f06:ccb::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AC59E109C for ; Fri, 11 Apr 2014 01:53:39 +0000 (UTC) Received: from hergotha.csail.mit.edu (localhost [127.0.0.1]) by hergotha.csail.mit.edu (8.14.7/8.14.7) with ESMTP id s3B1rOsa092269; Thu, 10 Apr 2014 21:53:24 -0400 (EDT) (envelope-from wollman@hergotha.csail.mit.edu) Received: (from wollman@localhost) by hergotha.csail.mit.edu (8.14.7/8.14.4/Submit) id s3B1rOsI092266; Thu, 10 Apr 2014 21:53:24 -0400 (EDT) (envelope-from wollman) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <21319.19220.289068.38797@hergotha.csail.mit.edu> Date: Thu, 10 Apr 2014 21:53:24
-0400 From: Garrett Wollman To: d@delphij.net Subject: Re: ZFS panic: solaris assert: BP_EQUAL(bp, &zio->io_bp_orig) In-Reply-To: <53474002.5020003@delphij.net> References: <21319.15860.827329.334067@hergotha.csail.mit.edu> <53474002.5020003@delphij.net> X-Mailer: VM 7.17 under 21.4 (patch 22) "Instant Classic" XEmacs Lucid X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (hergotha.csail.mit.edu [127.0.0.1]); Thu, 10 Apr 2014 21:53:24 -0400 (EDT) X-Spam-Status: No, score=-0.8 required=5.0 tests=ALL_TRUSTED, HEADER_FROM_DIFFERENT_DOMAINS autolearn=disabled version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on hergotha.csail.mit.edu Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Apr 2014 01:53:40 -0000 < said: >> [I wrote:] >> And it just now panicked again. Help! > Have you tried setting vfs.zfs.nopwrite_enabled=0 in /boot/loader.conf? Didn't even know about it. I looked at the Illumos commit log and figured out that I should be able to work around it by setting checksum=fletcher4. Since it's a production server I can't really take a reboot just to try this out. (I added it to loader.conf just now, so if I need to reboot it again before this bug is fixed, I can put it back to sha256.) -GAWollman From owner-freebsd-fs@FreeBSD.ORG Fri Apr 11 08:17:58 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 93A2BC0D; Fri, 11 Apr 2014 08:17:58 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 68241166D; Fri, 11 Apr 2014 08:17:58 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3B8HwIC076113; Fri, 11 Apr 2014 08:17:58 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3B8HwYE076112; Fri, 11 Apr 2014 08:17:58 GMT (envelope-from linimon) Date: Fri, 11 Apr 2014 08:17:58 GMT Message-Id: <201404110817.s3B8HwYE076112@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/188443: [smbfs] Segfault with tail(1) when mmap(2) called X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Apr 2014 08:17:58 -0000 Synopsis: [smbfs] Segfault with tail(1) when mmap(2) called Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Apr 11 08:17:46 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=188443

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 12 01:30:54 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 61D50538 for ; Sat, 12 Apr 2014 01:30:54 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2B1521612 for ; Sat, 12 Apr 2014 01:30:54 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3C1UscI010704 for ; Sat, 12 Apr 2014 01:30:54 GMT (envelope-from bdrewery@freefall.freebsd.org) Received: (from bdrewery@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3C1UrCr010701 for freebsd-fs@FreeBSD.org; Sat, 12 Apr 2014 01:30:53 GMT (envelope-from bdrewery) Received: (qmail 17112 invoked from network); 11 Apr 2014 20:30:52 -0500 Received: from unknown (HELO ?10.10.0.24?) (freebsd@shatow.net@10.10.0.24) by sweb.xzibition.com with ESMTPA; 11 Apr 2014 20:30:52 -0500 Message-ID: <53489745.4080708@FreeBSD.org> Date: Fri, 11 Apr 2014 20:30:45 -0500 From: Bryan Drewery Organization: FreeBSD User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Konstantin Belousov Subject: Re: Poudriere: rm -rf: Directory not empty References: <20140403173044.GY21331@kib.kiev.ua> In-Reply-To: <20140403173044.GY21331@kib.kiev.ua> X-Enigmail-Version: 1.6 OpenPGP: id=6E4697CF; url=http://www.shatow.net/bryan/bryan2.asc Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="wsG8224PseElff0wt3dNe89joGiGV6JxC" Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2014 01:30:54 -0000

On 4/3/2014 12:30 PM, Konstantin Belousov wrote:
> On Thu, Apr 03, 2014 at 09:35:27AM -0500, Bryan Drewery wrote:
>> Hi,
>>
>> While using Poudriere to build packages on segregated tmpfs jails
>> we sometimes get the following errors:
>> [snip]
>> In the other cases it's not clear if looping on rm -rf would work or
>> if it would spin forever. We have not tried it since it's so difficult
>> to reproduce.
>
> When the situation occurred and you notice it, do you still have access
> to the tmpfs directory which failed rm -rf ? If yes, try to do ls -la
> there, and ktrace the "rm -rf".

No. Once it realizes it can't do any work it cleans up and umounts the tmpfs.

What I will do is add some debugging into poudriere so if it hits the issue it will try to analyze it a bit and gather some more data. Hopefully the user running it will report it back to us.

> Another approach is to patch tmpfs_rmdir() in tmpfs_vnops.c and dump some
> information when ENOTEMPTY error is returned; e.g. you could print the
> directory content and tn_size.

If I can find a way to reliably reproduce the issue I will do that.
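The data being asked for here is cheap to capture while the failing directory is still mounted; a sketch along the lines of the earlier truss session, with a placeholder path:

  ls -la /path/to/stuck/dir                        # contents as the kernel sees them
  ktrace -i -f rm.ktrace rm -rf /path/to/stuck/dir # -i follows rm's children
  kdump -f rm.ktrace | egrep 'rmdir|unlink' | tail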
--
Regards,
Bryan Drewery

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 12 02:04:08 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2366B9E4 for ; Sat, 12 Apr 2014 02:04:08 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 071451873 for ; Sat, 12 Apr 2014 02:04:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3C247K7022287 for ; Sat, 12 Apr 2014 02:04:07 GMT (envelope-from bdrewery@freefall.freebsd.org) Received: (from bdrewery@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3C247rg022286 for freebsd-fs@FreeBSD.org; Sat, 12 Apr 2014 02:04:07 GMT (envelope-from bdrewery) Received: (qmail 22840 invoked from network); 11 Apr 2014 21:04:05 -0500 Received: from unknown (HELO ?10.10.0.24?) (freebsd@shatow.net@10.10.0.24) by sweb.xzibition.com with ESMTPA; 11 Apr 2014 21:04:05 -0500 Message-ID: <53489F0D.1020702@FreeBSD.org> Date: Fri, 11 Apr 2014 21:03:57 -0500 From: Bryan Drewery Organization: FreeBSD User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org Subject: getdirentries cookies usage outside of UFS X-Enigmail-Version: 1.6 OpenPGP: id=6E4697CF; url=http://www.shatow.net/bryan/bryan2.asc Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="XrrQWNhbOJ2nFQ0pcgF29OwHuLlcSWJmL" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2014 02:04:08 -0000

Recently I was working on a compat syscall for sys_getdirentries() that converts between our dirent and the FreeBSD dirent struct. We had never tried using this on TMPFS and when we did, we ran into weird issues (hence my recent commits to TMPFS to clarify some of the getdirentries() code).
We were not using cookies, so I referenced the Linux compat module (linux_file.c, getdents_common()).

I ran across this code:

> /*
>  * When using cookies, the vfs has the option of reading from
>  * a different offset than that supplied (UFS truncates the
>  * offset to a block boundary to make sure that it never reads
>  * partway through a directory entry, even if the directory
>  * has been compacted).
>  */
> while (len > 0 && ncookies > 0 && *cookiep <= off) {
>         bdp = (struct dirent *) inp;
>         len -= bdp->d_reclen;
>         inp += bdp->d_reclen;
>         cookiep++;
>         ncookies--;
> }

At first it looked innocuous, but then it occurred to me that it was the root of the issue I was having: it was eating my cookies based on their value, despite tmpfs cookies being random hash values that have no sequential relation. So I looked at how NFS was handling the same code and found this lovely hack from r216691:

> not_zfs = strcmp(vp->v_mount->mnt_vfc->vfc_name, "zfs");
> ...
> while (cpos < cend && ncookies > 0 &&
>     (dp->d_fileno == 0 || dp->d_type == DT_WHT ||
>     (not_zfs != 0 && ((u_quad_t)(*cookiep)) <= toff))) {
>         cpos += dp->d_reclen;
>         dp = (struct dirent *)cpos;
>         cookiep++;
>         ncookies--;
> }

I ended up doing the opposite, only running the code if getting dirents from "ufs".

So there are multiple issues here.

1. NFS is broken on TMPFS. I can see why it's gone unnoticed so long; why would you do that? Still probably worth fixing.

2. Linux and SVR4 getdirentries() are both broken on TMPFS/ZFS. I am surprised Linux+ZFS has not been noticed by now. I am aware the SVR4 code is full of other bugs too; I ran across many just reviewing the getdirentries code alone.

Do any other file systems besides UFS do this offset/cookie truncation/rewind? If UFS is the only one, it may be acceptable to change this zfs check to !ufs and add it to the other modules. If we don't like that, or there are potentially other file systems doing this too, how about adding a flag somewhere to indicate that the file system has monotonically increasing offsets and needs this rewind support? I'm not sure where that is best done; struct vfsconf?
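To make the struct vfsconf idea concrete, here is a minimal sketch of how the Linux compat loop quoted above could be gated on such a flag. VFCF_SEQ_DIROFF is a name invented here purely for illustration; only the loop body comes from the code quoted in this message:

/*
 * Hypothetical: file systems whose directory offsets increase
 * monotonically (UFS) would set VFCF_SEQ_DIROFF in their vfsconf
 * vfc_flags; tmpfs and ZFS, whose cookies are hash values, would
 * leave it clear, so the rewind below could never eat their cookies.
 */
#define	VFCF_SEQ_DIROFF	0x00100000	/* invented for this sketch */

if ((vp->v_mount->mnt_vfc->vfc_flags & VFCF_SEQ_DIROFF) != 0) {
	while (len > 0 && ncookies > 0 && *cookiep <= off) {
		bdp = (struct dirent *)inp;
		len -= bdp->d_reclen;
		inp += bdp->d_reclen;
		cookiep++;
		ncookies--;
	}
}

The r216691 hack in the NFS server could then test the same flag instead of comparing vfc_name against "zfs".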
--
Regards,
Bryan Drewery

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 12 13:10:37 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AD4F4680; Sat, 12 Apr 2014 13:10:37 +0000 (UTC) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id 6E3D615B4; Sat, 12 Apr 2014 13:10:37 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:38fe:bc98:65e7:fb6b]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 9F8824AC2D; Sat, 12 Apr 2014 17:10:29 +0400 (MSK) Date: Sat, 12 Apr 2014 17:09:53 +0400 From: Lev Serebryakov Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <981154629.20140412170953@serebryakov.spb.ru> To: freebsd-fs@FreeBSD.org, freebsd-stable@freebsd.org Subject: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN! MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: lev@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2014 13:10:37 -0000

Hello, Freebsd-fs.

On my 10-STABLE (r263965) system transmission-daemon stopped working and could not be killed (it waits forever in the STOP state after "kill -KILL"), and the kernel reports an overfilled accept TCP queue for its socket (sonewconn: pcb 012345678FFFFFFF: Listen queue overflow).

I tried "shutdown -r now"; the shutdown was aborted because of the process which would not die, and nothing more could be done: the system doesn't react to the keyboard after that.

I waited one hour (!). No result, only more "Listen queue overflow" messages on the console.

Power-off. Power-on.

None of the UFS2 filesystems could be recovered by the automated fsck, due to journal/softupdates inconsistencies. I needed to run "fsck -f" TWICE for each of them (as the first run asks to re-run fsck).

Please note, they are filesystems on an MBR slice + BSD label on a simple SATA disk attached to a chipset port; no RAID, no "strange" GEOM modules, nothing fancy. Plain and easy install -- MBR with one slice, BSD label, filesystems, that's all.

So, there are two questions:

(1) Does UFS2 SU+J work at all on a STABLE system? Should it?!

(2) How could I avoid such a situation; how could I reboot the system WITHOUT such a disaster when one process refuses to die?
--
// Black Lion AKA Lev Serebryakov

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 12 13:28:17 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B151AD14; Sat, 12 Apr 2014 13:28:17 +0000 (UTC) Received: from alogt.com (alogt.com [69.36.191.58]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 883F1173F; Sat, 12 Apr 2014 13:28:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=alogt.com; s=default; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date; bh=nHMs4xpb+mN62KpckcUEgN8TDdR1UDnlo1e9C1W5+HU=; b=d+1dzssQIzC6pVY4ymBNDFtIQOnlOs0vIYwxbBcZx92NrCDip6tnwB168fUtfDGIdZ1nTOC94+cu/gaqLVqBN07VpglQsyrDQ9RyAcwMlQhz4vP5hhXWjaElKk4jjwAouSel5ky92IHAQhtcwVlX2r2JnXDTnHutIGl9mEC9CsU=; Received: from [182.55.101.96] (port=53373 helo=X220.alogt.com) by sl-508-2.slc.westdc.net with esmtpsa (SSLv3:DHE-RSA-AES128-SHA:128) (Exim 4.82) (envelope-from ) id 1WYxyi-001Lll-IA; Sat, 12 Apr 2014 07:28:17 -0600 Date: Sat, 12 Apr 2014 21:28:13 +0800 From: Erich Dollansky To: lev@FreeBSD.org Subject: Re: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN! Message-ID: <20140412212813.49da6fbf@X220.alogt.com> In-Reply-To: <981154629.20140412170953@serebryakov.spb.ru> References: <981154629.20140412170953@serebryakov.spb.ru> X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; amd64-portbld-freebsd10.0) MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - sl-508-2.slc.westdc.net X-AntiAbuse: Original Domain - freebsd.org X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - alogt.com X-Get-Message-Sender-Via: sl-508-2.slc.westdc.net: authenticated_id: erichsfreebsdlist@alogt.com X-Source: X-Source-Args: X-Source-Dir: Cc: freebsd-fs@FreeBSD.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2014 13:28:17 -0000

Hi,

On Sat, 12 Apr 2014 17:09:53 +0400 Lev Serebryakov wrote:

> (1) Does UFS2 SU+J work at all on a STABLE system? Should it?!

It should.

> (2) How could I avoid such a situation; how could I reboot the system
> WITHOUT such a disaster when one process refuses to die?

Do you know the name of the program which refuses to stop?
Erich

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 12 13:39:45 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C7C0D1A7; Sat, 12 Apr 2014 13:39:45 +0000 (UTC) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8B71F1814; Sat, 12 Apr 2014 13:39:45 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:38fe:bc98:65e7:fb6b]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 22D184AC1C; Sat, 12 Apr 2014 17:39:44 +0400 (MSK) Date: Sat, 12 Apr 2014 17:39:06 +0400 From: Lev Serebryakov Organization: FreeBSD Project X-Priority: 3 (Normal) Message-ID: <42681337.20140412173906@serebryakov.spb.ru> To: Erich Dollansky Subject: Re: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN! In-Reply-To: <20140412212813.49da6fbf@X220.alogt.com> References: <981154629.20140412170953@serebryakov.spb.ru> <20140412212813.49da6fbf@X220.alogt.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@FreeBSD.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: lev@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2014 13:39:45 -0000

Hello, Erich.

You wrote on 12 April 2014, 17:28:13:

>> (1) Does UFS2 SU+J work at all on a STABLE system? Should it?!
ED> It should.

It doesn't, according to my experience. Every "non-clean" reboot gives me an hour of dancing around the server, running fsck by hand multiple times.

>> (2) How could I avoid such a situation; how could I reboot the system
>> WITHOUT such a disaster when one process refuses to die?
ED> Do you know the name of the program which refuses to stop?

Yep: "transmission-daemon" from the "net-p2p/transmission-daemon" port.

--
// Black Lion AKA Lev Serebryakov

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 13 10:10:20 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A4EA28E8; Sun, 13 Apr 2014 10:10:20 +0000 (UTC) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 6689E1CBE; Sun, 13 Apr 2014 10:10:20 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:5803:a51e:6ee3:b101]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 907124AC32; Sun, 13 Apr 2014 14:10:11 +0400 (MSK) Date: Sun, 13 Apr 2014 14:10:10 +0400 From: Lev Serebryakov Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <482103242.20140413141010@serebryakov.spb.ru> To: freebsd-fs@FreeBSD.org, freebsd-stable@freebsd.org Subject: UFS2 SU+J could not recover after power-off again (was: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN!)
In-Reply-To: <981154629.20140412170953@serebryakov.spb.ru> References: <981154629.20140412170953@serebryakov.spb.ru> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: pfg@FreeBSD.org, mckusick@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: lev@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Apr 2014 10:10:20 -0000

Hello, Freebsd-fs.

You wrote on 12 April 2014, 17:09:53:

LS> None of the UFS2 filesystems could be recovered by the automated fsck, due
LS> to journal/softupdates inconsistencies. I needed to run "fsck -f" TWICE for
LS> each of them (as the first run asks to re-run fsck).

"shutdown -h" rebooted the system and the UPS switched power off after that (with a delay); 2 out of 5 FSes could not be checked automatically with the journal. A manual full "fsck" run didn't find any serious problems, only one or two unlinked files (recovered to lost+found) and incorrect free block bitmaps!

WHY?! How can I trust UFS2 now?!

Both filesystems show the same scenario:

/dev/ufs/tmp: Journal file sequence mismatch 233263 != 231707
/dev/ufs/tmp: UNEXPECTED SU+J INCONSISTENCY
/dev/ufs/tmp: INTERNAL ERROR: GOT TO reply()
/dev/ufs/tmp: UNEXPECTED SOFT UPDATE INCONSISTENCY. RUN fsck MANUALLY.

/dev/ufs/usr: Journal file sequence mismatch 287936 != 282572
/dev/ufs/usr: UNEXPECTED SU+J INCONSISTENCY
/dev/ufs/usr: INTERNAL ERROR: GOT TO reply()
/dev/ufs/usr: UNEXPECTED SOFT UPDATE INCONSISTENCY. RUN fsck MANUALLY.

Again: these FSes were checked with a full fsck two days ago. They reside on a SATA HDD without any non-standard or complex GEOM modules (only geom_part), the HDD is attached to a chipset SATA port, and there are no RAID controllers or anything like that.

EVERY non-clean reboot of the server leads to "RUN fsck MANUALLY".

--
// Black Lion AKA Lev Serebryakov

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 13 12:03:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4F9F1E9C; Sun, 13 Apr 2014 12:03:33 +0000 (UTC) Received: from mail-lb0-x232.google.com (mail-lb0-x232.google.com [IPv6:2a00:1450:4010:c04::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 16A68177F; Sun, 13 Apr 2014 12:03:31 +0000 (UTC) Received: by mail-lb0-f178.google.com with SMTP id s7so4965495lbd.9 for ; Sun, 13 Apr 2014 05:03:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=APZXUXJOOdiOORLHYcHE/aJlsdC35pYZlUEdeRkD/io=; b=mId54X3F1p3dapNnvHKlwnGeF3G4WOr2xX+V983Q7RV6L4FDYu9BNohNdacYsoyFd+ 8fRvv9cae6O0Yxbf9KG0FA76hxfWdT+wcHVGCrafu6SVG0JhDRCPtGEBHWo/1H2vBR5b awsLXaCTAODnF8xyw7sIXk+pTni0Myp4CUgjfSR41DklDgEEcVNG+9CUPVmAA6Z6g6ja h0ar95zzVYHkTReXyAcDxmzORpdP4JASu4YKrnf94n5wKyVXOrbxEBf2DNdRlJColJIV C8E0p7dIJJXCbkFsRyVjIPj+FQ+Klp0HslNot8NC5IbwJrK85OLtV9WYRAf6B/Cvy7q0 EFIw== X-Received: by 10.112.126.7 with SMTP id mu7mr23628864lbb.17.1397390609541; Sun, 13 Apr 2014 05:03:29 -0700 (PDT) Received: from [10.0.1.9] (ip-95-220-138-28.bb.netbynet.ru.
[95.220.138.28]) by mx.google.com with ESMTPSA id d4sm11248713lbr.27.2014.04.13.05.03.27 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sun, 13 Apr 2014 05:03:28 -0700 (PDT) Subject: Re: UFS2 SU+J could not recover after power-off again (was: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN!) Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Content-Type: text/plain; charset=utf-8 From: Dmitry Sivachenko X-Priority: 3 (Normal) In-Reply-To: <482103242.20140413141010@serebryakov.spb.ru> Date: Sun, 13 Apr 2014 16:03:26 +0400 Content-Transfer-Encoding: quoted-printable Message-Id: <8A3243EF-90D0-428A-99C1-8360DB402B86@gmail.com> References: <981154629.20140412170953@serebryakov.spb.ru> <482103242.20140413141010@serebryakov.spb.ru> To: lev@FreeBSD.org X-Mailer: Apple Mail (2.1874) Cc: freebsd-fs@FreeBSD.org, pfg@FreeBSD.org, freebsd-stable@freebsd.org, mckusick@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Apr 2014 12:03:33 -0000

Turn off journaling; it has many reported issues.

I run hundreds of servers at work with UFS2+SU (w/o J), and have never had a single problem.

On 13 April 2014, at 14:10, Lev Serebryakov wrote:

> Hello, Freebsd-fs.
> You wrote on 12 April 2014, 17:09:53:
>
> LS> None of the UFS2 filesystems could be recovered by the automated fsck, due
> LS> to journal/softupdates inconsistencies. I needed to run "fsck -f" TWICE for
> LS> each of them (as the first run asks to re-run fsck).
>
> "shutdown -h" rebooted the system and the UPS switched power off after that
> (with a delay); 2 out of 5 FSes could not be checked automatically with the
> journal. A manual full "fsck" run didn't find any serious problems, only one
> or two unlinked files (recovered to lost+found) and incorrect free block
> bitmaps!
>
> WHY?! How can I trust UFS2 now?!
>
> Both filesystems show the same scenario:
>
> /dev/ufs/tmp: Journal file sequence mismatch 233263 != 231707
> /dev/ufs/tmp: UNEXPECTED SU+J INCONSISTENCY
> /dev/ufs/tmp: INTERNAL ERROR: GOT TO reply()
> /dev/ufs/tmp: UNEXPECTED SOFT UPDATE INCONSISTENCY. RUN fsck MANUALLY.
>
> /dev/ufs/usr: Journal file sequence mismatch 287936 != 282572
> /dev/ufs/usr: UNEXPECTED SU+J INCONSISTENCY
> /dev/ufs/usr: INTERNAL ERROR: GOT TO reply()
> /dev/ufs/usr: UNEXPECTED SOFT UPDATE INCONSISTENCY. RUN fsck MANUALLY.
>
> Again: these FSes were checked with a full fsck two days ago. They reside on a
> SATA HDD without any non-standard or complex GEOM modules (only geom_part),
> the HDD is attached to a chipset SATA port, and there are no RAID controllers
> or anything like that.
>
> EVERY non-clean reboot of the server leads to "RUN fsck MANUALLY".
>
> --
> // Black Lion AKA Lev Serebryakov
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 13 14:20:53 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7462152F; Sun, 13 Apr 2014 14:20:53 +0000 (UTC) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 3554F12F4; Sun, 13 Apr 2014 14:20:53 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:5803:a51e:6ee3:b101]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 0936D4AC1C; Sun, 13 Apr 2014 18:20:39 +0400 (MSK) Date: Sun, 13 Apr 2014 18:20:37 +0400 From: Lev Serebryakov Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <1927324019.20140413182037@serebryakov.spb.ru> To: Dmitry Sivachenko Subject: Re: UFS2 SU+J could not recover after power-off again (was: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN!) In-Reply-To: <8A3243EF-90D0-428A-99C1-8360DB402B86@gmail.com> References: <981154629.20140412170953@serebryakov.spb.ru> <482103242.20140413141010@serebryakov.spb.ru> <8A3243EF-90D0-428A-99C1-8360DB402B86@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@FreeBSD.org, pfg@FreeBSD.org, freebsd-stable@freebsd.org, mckusick@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: lev@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Apr 2014 14:20:53 -0000

Hello, Dmitry.

You wrote on 13 April 2014, 16:03:26:

DS> Turn off journaling; it has many reported issues.
DS> I run hundreds of servers at work with UFS2+SU (w/o J), and have never had a single problem.

To be honest, before the journal was introduced I had the same problems ("unexpected SU inconsistency"), but only after 1.5-2 hours of thrashing the HDDs with "background fsck".
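For anyone who wants to follow this advice, the journal can be removed from an existing filesystem with tunefs(8) while keeping soft updates. A minimal sketch, assuming a 9.x-or-later tunefs and that the target filesystem is unmounted (or mounted read-only) when the flags are changed; the device name is just an example from this thread:

# turn off the SU+J journal, leaving plain soft updates enabled
tunefs -j disable /dev/ufs/usr
# print the resulting superblock parameters to confirm the change
tunefs -p /dev/ufs/usr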
--
// Black Lion AKA Lev Serebryakov

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 04:30:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BDCA3808; Mon, 14 Apr 2014 04:30:44 +0000 (UTC) Received: from udns.ultimateDNS.NET (ultimatedns.net [209.180.214.225]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6C2041BEE; Mon, 14 Apr 2014 04:30:40 +0000 (UTC) Received: from udns.ultimateDNS.NET (localhost [127.0.0.1]) by udns.ultimateDNS.NET (8.14.5/8.14.5) with ESMTP id s3E4YHkA058857; Sun, 13 Apr 2014 21:34:23 -0700 (PDT) (envelope-from bsd-lists@bsdforge.com) Received: (from www@localhost) by udns.ultimateDNS.NET (8.14.5/8.14.5/Submit) id s3E4YBHU058851; Sun, 13 Apr 2014 21:34:11 -0700 (PDT) (envelope-from bsd-lists@bsdforge.com) Received: from udns.ultimatedns.net ([209.180.214.225]) (UDNSMS authenticated user chrish) by ultimatedns.net with HTTP; Sun, 13 Apr 2014 21:34:12 -0700 (PDT) Message-ID: <5b6093dfd46778ea273115ad12cbaf26.authenticated@ultimatedns.net> In-Reply-To: <981154629.20140412170953@serebryakov.spb.ru> References: <981154629.20140412170953@serebryakov.spb.ru> Date: Sun, 13 Apr 2014 21:34:12 -0700 (PDT) Subject: Re: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN! From: "Chris H" To: lev@freebsd.org User-Agent: UDNSMS/2.0.3 MIME-Version: 1.0 Content-Type: text/plain;charset=utf-8 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 04:30:44 -0000

> Hello, Freebsd-fs.
>
> On my 10-STABLE (r263965) system transmission-daemon stopped working and
> could not be killed (it waits forever in the STOP state after "kill -KILL"),
> and the kernel reports an overfilled accept TCP queue for its socket
> (sonewconn: pcb 012345678FFFFFFF: Listen queue overflow).
>
> I tried "shutdown -r now"; the shutdown was aborted because of the process
> which would not die, and nothing more could be done: the system doesn't
> react to the keyboard after that.

Does using halt work better?

--Chris

> I waited one hour (!). No result, only more "Listen queue overflow" messages
> on the console.
>
> Power-off. Power-on.
>
> None of the UFS2 filesystems could be recovered by the automated fsck, due
> to journal/softupdates inconsistencies. I needed to run "fsck -f" TWICE for
> each of them (as the first run asks to re-run fsck).
>
> Please note, they are filesystems on an MBR slice + BSD label on a simple SATA
> disk attached to a chipset port; no RAID, no "strange" GEOM modules, nothing
> fancy. Plain and easy install -- MBR with one slice, BSD label, filesystems,
> that's all.
>
> So, there are two questions:
>
> (1) Does UFS2 SU+J work at all on a STABLE system? Should it?!
>
> (2) How could I avoid such a situation; how could I reboot the system WITHOUT
> such a disaster when one process refuses to die?
>
> --
> // Black Lion AKA Lev Serebryakov
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
>

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 09:37:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0411A506 for ; Mon, 14 Apr 2014 09:37:13 +0000 (UTC) Received: from mail-wg0-x22d.google.com (mail-wg0-x22d.google.com [IPv6:2a00:1450:400c:c00::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 961B2190A for ; Mon, 14 Apr 2014 09:37:12 +0000 (UTC) Received: by mail-wg0-f45.google.com with SMTP id l18so8045222wgh.16 for ; Mon, 14 Apr 2014 02:37:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:reply-to:date:message-id:subject:from:to:content-type; bh=+xCukF/HxIH3kJckYFNwuN0bf0bkwZWDKXVFmngh45s=; b=L/gvzTilYN5KtonelVhzvT0oyiosrmCRgSfiqKl5Nkqd7mv+05ubKSbE8sh+AE2KBJ B+B2Hjv0frPS7puOaCvud2AaOjfOsSoAF6G0keA9ahhXzN/WtgJkNWBKYxjaDePkHY6M A2lxnT4MHTRiuwqDH0xKqoWdAJluZXmUc8ARinfYMOdONb2fry2PiCL2oXDCX5/OuK3p Pkn9rdkLyhACVGTC6NGOu+wU9PVt8dctjTLL1P+jRzC0rZ0vODf66du0p0/yUlXOmXGT R1wGIIiZc+U8em5R6FGWsao40qqJZ96oPLVIZtZSXvsBd11BWr7L4mCZ9duTak1VrBnp Lx5g== MIME-Version: 1.0 X-Received: by 10.180.93.226 with SMTP id cx2mr8999448wib.16.1397468230908; Mon, 14 Apr 2014 02:37:10 -0700 (PDT) Received: by 10.217.9.134 with HTTP; Mon, 14 Apr 2014 02:37:10 -0700 (PDT) Date: Mon, 14 Apr 2014 17:37:10 +0800 Message-ID: Subject: NFSv4: prob err=10036 From: Marcelo Araujo To: "freebsd-fs@freebsd.org" Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: araujo@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 09:37:13 -0000

Hi all,

Has anyone seen this prob err before when trying to mount NFSv4?

machine_a# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/
machine_a# mount_nfs: /mnt, : Input/output error
machine_a# tail /var/log/messages | grep nfsv4
Apr 13 17:03:33 ESSD46B6E kernel: nfsv4 client/server protocol prob err=10036

I have another machine with the same settings that can successfully mount the same NFSv4 share.
machine_c# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/ machine_c# Best Regards, -- Marcelo Araujo araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 11:06:44 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4865BF00 for ; Mon, 14 Apr 2014 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2B8831656 for ; Mon, 14 Apr 2014 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3EB6iM1025850 for ; Mon, 14 Apr 2014 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3EB6hqt025848 for freebsd-fs@FreeBSD.org; Mon, 14 Apr 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 14 Apr 2014 11:06:43 GMT Message-Id: <201404141106.s3EB6hqt025848@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 11:06:44 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/188443 fs [smbfs] Segfault with tail(1) when mmap(2) called o kern/188187 fs [zfs] 10-stable: Kernel panic on zpool import: integer o kern/187905 fs [zpool] Confusion zpool with a block size in HDD - blo o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix o kern/187261 fs [fuse] FUSE kernel panic when using socket / bind o bin/187071 fs [nfs] nfs server only start 2 daemons 1 master & 1 ser o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186574 fs [zfs] zpool history hangs (infinite loop) o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. 
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o 
kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167362 fs [fusefs] Reproduceble Page Fault when running rsync ov o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang 
upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs 
[zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 345 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 14:00:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DBF7CB04; Mon, 14 Apr 2014 14:00:46 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 61C591A91; Mon, 14 Apr 2014 14:00:45 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqUEAHrpS1ODaFve/2dsb2JhbABZg0FXgxC4W4ZkUYE4dIIlAQEBAwEBAQEgKyALBRYYAgINGQIpAQkmDgcEARwEh1MIDahZolMXgSmMYxACARs0BxaCWYFJBJYJhA6RDYNNITGBPQ X-IronPort-AV: E=Sophos;i="4.97,857,1389762000"; d="scan'208";a="114284475" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 14 Apr 2014 10:00:17 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 3F114B4069; Mon, 14 Apr 2014 10:00:17 -0400 (EDT) Date: Mon, 14 Apr 2014 10:00:17 -0400 (EDT) From: Rick Macklem To: araujo@FreeBSD.org Message-ID: <936380350.10694814.1397484017247.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: NFSv4: prob err=10036 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 14:00:46 -0000 Marcelo Araujo wrote: > Hi all, > > Anyone have saw this prob err before when try to mount a NFSv4? > > machine_a# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/ > machine_a# mount_nfs: /mnt, : Input/output error > machine_a# tail /var/log/messages |grep nfsv4 > Apr 13 17:03:33 ESSD46B6E kernel: nfsv4 client/server protocol prob > err=10036 > Well, 10036 is NFSERR_BADXDR (they are all in sys/fs/nfs/nfsproto.h). This means that the server didn't like the RPC message presented to it. (I have no idea why that would be the case for machine_a?) 
If you capture packets while attempting the mount, you can look at them in wireshark and maybe see how they are trashed? (I just got home, so I can take a look at a packet capture, if you email it to me as an attachment.)

# tcpdump -s 0 -w mnt.pcap host 192.168.2.100

run on machine_a during the mount attempt should do it (the capture ends up in mnt.pcap). rick

> I have another machine with the same settings that can successfully mount
> the same NFSv4 share.
>
> machine_c# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/
> machine_c#
>
> Best Regards,
> --
> Marcelo Araujo
> araujo@FreeBSD.org
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 14:26:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 45C94FDB; Mon, 14 Apr 2014 14:26:43 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id CF8371D18; Mon, 14 Apr 2014 14:26:42 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqEEAI7vS1ODaFve/2dsb2JhbABZhBiDEMARgTh0giUBAQEDJARLBwUWDgoCAg0ZAl+IBwiocKJVF4EpjRE0gnaBSQSUdJYwg00hgW4 X-IronPort-AV: E=Sophos;i="4.97,857,1389762000"; d="scan'208";a="114292967" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 14 Apr 2014 10:26:41 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 7A3C8B40A2; Mon, 14 Apr 2014 10:26:41 -0400 (EDT) Date: Mon, 14 Apr 2014 10:26:41 -0400 (EDT) From: Rick Macklem To: Bryan Drewery Message-ID: <7314631.10715155.1397485601490.JavaMail.root@uoguelph.ca> Subject: Re: getdirentries cookies usage outside of UFS MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD Filesystems , Kirk McKusick , Konstantin Belousov X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 14:26:43 -0000

Bryan Drewery wrote:
> Recently I was working on a compat syscall for sys_getdirentries() that
> converts between our dirent and the FreeBSD dirent struct. We had never
> tried using this on TMPFS, and when we did we ran into weird issues (hence
> my recent commits to TMPFS to clarify some of the getdirentries() code).
> We were not using cookies, so I referenced the Linux compat module
> (linux_file.c, getdents_common()).
>
> I ran across this code:
>
> > /*
> >  * When using cookies, the vfs has the option of reading from
> >  * a different offset than that supplied (UFS truncates the
> >  * offset to a block boundary to make sure that it never reads
> >  * partway through a directory entry, even if the directory
> >  * has been compacted).
> > */
> > while (len > 0 && ncookies > 0 && *cookiep <= off) {
> > 	bdp = (struct dirent *) inp;
> > 	len -= bdp->d_reclen;
> > 	inp += bdp->d_reclen;
> > 	cookiep++;
> > 	ncookies--;
> > }
>
> At first it looked innocuous but then it occurred to me it was the root
> of the issue I was having as it was eating my cookies based on their
> value, despite tmpfs cookies being random hash values that have no
> sequential relation. So I looked at how NFS was handling the same code
> and found this lovely hack from r216691:
>
> > not_zfs = strcmp(vp->v_mount->mnt_vfc->vfc_name, "zfs");
> > ...
> > while (cpos < cend && ncookies > 0 &&
> >     (dp->d_fileno == 0 || dp->d_type == DT_WHT ||
> >     (not_zfs != 0 && ((u_quad_t)(*cookiep)) <= toff))) {
> > 	cpos += dp->d_reclen;
> > 	dp = (struct dirent *)cpos;
> > 	cookiep++;
> > 	ncookies--;
> > }
>
> I ended up doing the opposite, only running the code if getting dirents
> from "ufs".
>
> So there are multiple issues here.
>
> 1. NFS is broken on TMPFS. I can see why it's gone so long unnoticed,
> why would you do that? Still probably worth fixing.
>
Well, since exporting a volatile file system over NFS definitely breaks the protocol, the correct fix should probably be to disable TMPFS so that it cannot be exported at all. However, someone probably likes to do this anyhow, so I guess it should be left as is or fixed...

> 2. Linux and SVR4 getdirentries() are both broken on TMPFS/ZFS. I am
> surprised Linux+ZFS has not been noticed by now. I am aware the SVR4 is
> full of other bugs too. I ran across many just reviewing the
> getdirentries code alone.
>
> Do any other file systems besides UFS do this offset/cookie
> truncation/rewind? If UFS is the only one it may be acceptable to change
> this zfs check to !ufs and add it to the other modules. If we don't like
> that, or there are potentially other file systems doing this too, how
> about adding a flag to somewhere to indicate the file system has
> monotonically increasing offsets and needs this rewind support. I'm not
> sure where that is best done, struct vfsconf?
>
At a glance, I don't think UFS returns directory entries starting at a block boundary any more, either. (It reads from a block boundary, but skips the ones before the startoffset, if I read ufs_readdir() correctly.) As such, I'm not sure this is needed for UFS?

However, it is going to be difficult to determine if this loop is no longer necessary for any file system. If it can't be determined that this is no longer necessary at all, a flag indicating a file system uses non-monotonically increasing directory offsets seems the cleanest fix to me. However, this change might be difficult to MFC. If it was a "non-monotonically increasing dir offset" flag that is set, old code would still function the same without knowledge of the flag, so long as the NFS server case was changed to "not_zfs and flag not set" and the "not_zfs" test was still done. (Of course, then the "not_zfs" case will probably be in the sources for decades to come, because no one can be sure it is safe to get rid of;-)

It would be nice if it could be determined if any file system still backs up to a block boundary, but I can't answer that. I've cc'd Kirk and Kostik, in case either of them know or have opinions on this?

rick
ps: You might also take a look for uio_offset < 0 tests, since a -ve offset used to be meaningful. (Sorry, I can't remember when/how this was used, but if the high order bit of a directory offset cookie is set, there might be surprises;-) Maybe this isn't a problem for dirs, if I recall it correctly?

> --
> Regards,
> Bryan Drewery
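For concreteness, a minimal sketch of the workaround Bryan describes, inverting the r216691 test so that the cookie rewind only runs for UFS. This is illustrative only, not the committed code, and it reuses the variable names from the linux_file.c excerpt quoted earlier:

	/*
	 * Hypothetical variant of the getdents_common() loop: only discard
	 * cookies by value when the underlying filesystem is UFS, the one
	 * filesystem known to restart reads at a block boundary.  tmpfs and
	 * ZFS cookies are hashes, not byte offsets, and pass through.
	 */
	int is_ufs = strcmp(vp->v_mount->mnt_vfc->vfc_name, "ufs") == 0;

	while (is_ufs && len > 0 && ncookies > 0 && *cookiep <= off) {
		bdp = (struct dirent *)inp;
		len -= bdp->d_reclen;
		inp += bdp->d_reclen;
		cookiep++;
		ncookies--;
	}

The flag-based design Rick sketches would replace the hard-coded fstype comparison with a per-filesystem capability bit, but the control flow would stay the same.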
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 15:50:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 80BD7DE for ; Mon, 14 Apr 2014 15:50:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5A0C31764 for ; Mon, 14 Apr 2014 15:50:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3EFo0uM022812 for ; Mon, 14 Apr 2014 15:50:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3EFo0Tu022802; Mon, 14 Apr 2014 15:50:00 GMT (envelope-from gnats) Date: Mon, 14 Apr 2014 15:50:00 GMT Message-Id: <201404141550.s3EFo0Tu022802@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Karl Denninger List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 15:50:01 -0000 The following reply was made to PR kern/187594; it has been noted by GNATS. From: Karl Denninger To: bug-followup@FreeBSD.org, karl@fs.denninger.net Cc: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Date: Mon, 14 Apr 2014 10:40:49 -0500

Follow-up: 21 days at this point of uninterrupted uptime. Inact pages are stable, as is the free list; wired and free are appropriate and vary with load as expected; ZERO swapping; and performance is and has remained excellent, all on a very heavily used, fairly beefy (~24GB RAM, dual Xeon CPUs) production system under 10-STABLE.
--
-- Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 18:00:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4E443890 for ; Mon, 14 Apr 2014 18:00:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 22DAF16ED for ; Mon, 14 Apr 2014 18:00:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3EI00UJ064692 for ; Mon, 14 Apr 2014 18:00:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3EI00kY064691; Mon, 14 Apr 2014 18:00:00 GMT (envelope-from gnats) Date: Mon, 14 Apr 2014 18:00:00 GMT Message-Id: <201404141800.s3EI00kY064691@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dteske@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 18:00:01 -0000 The following reply was made to PR kern/187594; it has been noted by GNATS. From: To: , Cc: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Date: Mon, 14 Apr 2014 10:58:09 -0700

Been running this on stable/8 for a week now on 3 separate machines. All appears stable, and under heavy load we can certainly see the new reclaim firing early and appropriately when needed (no longer do we have programs getting swapped out).

Interestingly, in our testing we've found that we can force the old reclaim (code state prior to applying Karl's patch) to fire by sapping the few remaining pages from unallocated memory. I do this by exploiting a little-known bug in the Bourne shell to leak memory (command below).

	sh -c 'f(){ while :;do local b;done;};f'

Watching "top" in the un-patched state, we can see Wired memory grow from ARC usage but not drop. I then run the above command and "top" shows an "sh" process with a fast-growing "SIZE", quickly eating up about 100MB per second. When "top" shows the Free memory drop to mere KB (single pages), we see the original (again, unpatched) reclaim algorithm fire and the Wired memory finally starts to drop.

After applying this patch, we no longer have to play the game of "eat all my remaining memory to force the original reclaim event to free up pages", but rather the ARC waxes and wanes with normal application usage. However, I must say that on stable/8 the problem of applications going to sleep is not nearly as bad as I have experienced it in 9 or 10.
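For what it's worth, the same memory pressure can be generated without relying on that sh bug. Below is a hypothetical stand-alone C equivalent (illustrative only, not from the report): it allocates and touches roughly 100MB per second, mirroring the growth rate observed above. Run it only on a test box.

	/* memhog.c - illustrative only: consume free pages to force reclaim. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int
	main(void)
	{
		size_t chunk = 100UL * 1024 * 1024;	/* ~100MB per step */
		char *p;

		for (;;) {
			if ((p = malloc(chunk)) == NULL)
				break;			/* allocation failed */
			memset(p, 1, chunk);		/* touch pages so they are really backed */
			sleep(1);			/* roughly 100MB per second, as above */
		}
		puts("malloc failed; holding allocated pages");
		pause();				/* keep the pages referenced */
		return (0);
	}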
We are happy to report that the patch seems to be a win for stable/8 as well because in our case, we do like to have a bit of free memory and the old reclaim was not providing that. It's nice to not have to resort to tricks to get the ARC to pare down. -- Cheers, Devin _____________ The information contained in this message is proprietary and/or confidential. If you are not the intended recipient, please: (i) delete the message and all copies; (ii) do not disclose, distribute or use the message in any manner; and (iii) notify the sender immediately. In addition, please be aware that any message addressed to our domain is subject to archiving and review by persons other than the intended recipient. Thank you. From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 18:40:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 57C7E7EE for ; Mon, 14 Apr 2014 18:40:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 455771ABC for ; Mon, 14 Apr 2014 18:40:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3EIe1aF078386 for ; Mon, 14 Apr 2014 18:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3EIe1EA078385; Mon, 14 Apr 2014 18:40:01 GMT (envelope-from gnats) Date: Mon, 14 Apr 2014 18:40:01 GMT Message-Id: <201404141840.s3EIe1EA078385@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/186574: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 18:40:01 -0000 The following reply was made to PR kern/186574; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/186574: commit references a PR Date: Mon, 14 Apr 2014 18:38:18 +0000 (UTC) Author: delphij Date: Mon Apr 14 18:38:14 2014 New Revision: 264467 URL: http://svnweb.freebsd.org/changeset/base/264467 Log: Take into account when zpool history block grows exceeding 128KB in zpool(8) and zdb(8) by growing the buffer on demand with a cap of 1GB (specified in spa_history_create_obj()). 
PR: bin/186574 Submitted by: Andrew Childs (with changes) MFC after: 2 weeks Modified: head/cddl/contrib/opensolaris/cmd/zdb/zdb.c head/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Modified: head/cddl/contrib/opensolaris/cmd/zdb/zdb.c ============================================================================== --- head/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 14 18:14:09 2014 (r264466) +++ head/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 14 18:38:14 2014 (r264467) @@ -929,11 +929,16 @@ dump_dtl(vdev_t *vd, int indent) dump_dtl(vd->vdev_child[c], indent + 4); } +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) + static void dump_history(spa_t *spa) { nvlist_t **events = NULL; - char buf[SPA_MAXBLOCKSIZE]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t resid, len, off = 0; uint_t num = 0; int error; @@ -942,8 +947,11 @@ dump_history(spa_t *spa) char tbuf[30]; char internalstr[MAXPATHLEN]; + if ((buf = malloc(bufsize)) == NULL) + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); do { - len = sizeof (buf); + len = bufsize; if ((error = spa_history_get(spa, &off, &len, buf)) != 0) { (void) fprintf(stderr, "Unable to read history: " @@ -953,9 +961,26 @@ dump_history(spa_t *spa) if (zpool_history_unpack(buf, len, &resid, &events, &num) != 0) break; - off -= resid; + + /* + * If the history block is too big, double the buffer + * size and try again. + */ + if (resid == len) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); + return; + } + } } while (len != 0); + free(buf); (void) printf("\nHistory:\n"); for (int i = 0; i < num; i++) { Modified: head/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c ============================================================================== --- head/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 14 18:14:09 2014 (r264466) +++ head/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 14 18:38:14 2014 (r264467) @@ -3744,7 +3744,9 @@ zpool_history_unpack(char *buf, uint64_t return (0); } -#define HIS_BUF_LEN (128*1024) +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) /* * Retrieve the command history of a pool. @@ -3752,21 +3754,24 @@ zpool_history_unpack(char *buf, uint64_t int zpool_get_history(zpool_handle_t *zhp, nvlist_t **nvhisp) { - char buf[HIS_BUF_LEN]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t off = 0; nvlist_t **records = NULL; uint_t numrecords = 0; int err, i; + if ((buf = malloc(bufsize)) == NULL) + return (ENOMEM); do { - uint64_t bytes_read = sizeof (buf); + uint64_t bytes_read = bufsize; uint64_t leftover; if ((err = get_history(zhp, buf, &off, &bytes_read)) != 0) break; /* if nothing else was read in, we're at EOF, just return */ - if (!bytes_read) + if (bytes_read == 0) break; if ((err = zpool_history_unpack(buf, bytes_read, @@ -3774,8 +3779,25 @@ zpool_get_history(zpool_handle_t *zhp, n break; off -= leftover; + /* + * If the history block is too big, double the buffer + * size and try again. 
+ */ + if (leftover == bytes_read) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + err = ENOMEM; + break; + } + } + /* CONSTCOND */ } while (1); + free(buf); if (!err) { verify(nvlist_alloc(nvhisp, NV_UNIQUE_NAME, 0) == 0); _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 20:40:37 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C9908AFD; Mon, 14 Apr 2014 20:40:37 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9ED171841; Mon, 14 Apr 2014 20:40:37 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3EKebMU017675; Mon, 14 Apr 2014 20:40:37 GMT (envelope-from rmacklem@freefall.freebsd.org) Received: (from rmacklem@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3EKebsu017674; Mon, 14 Apr 2014 20:40:37 GMT (envelope-from rmacklem) Date: Mon, 14 Apr 2014 20:40:37 GMT Message-Id: <201404142040.s3EKebsu017674@freefall.freebsd.org> To: rich@enterprisesystems.net, rmacklem@FreeBSD.org, freebsd-fs@FreeBSD.org From: rmacklem@FreeBSD.org Subject: Re: bin/187071: [nfs] nfs server only start 2 daemons 1 master & 1 server X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 20:40:37 -0000 Synopsis: [nfs] nfs server only start 2 daemons 1 master & 1 server State-Changed-From-To: open->closed State-Changed-By: rmacklem State-Changed-When: Mon Apr 14 20:38:32 UTC 2014 State-Changed-Why: For FreeBSD9 and later, the nfsd threads are kernel threads and can be seen if the "H" option is used on the "ps" command. In other words, there are there, but not visible unless "H" is specified for "ps". 
http://www.freebsd.org/cgi/query-pr.cgi?pr=187071 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 14 22:28:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 37DDD1A6; Mon, 14 Apr 2014 22:28:03 +0000 (UTC) Received: from chez.mckusick.com (chez.mckusick.com [IPv6:2001:5a8:4:7e72:4a5b:39ff:fe12:452]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 188F21517; Mon, 14 Apr 2014 22:28:03 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id s3EMRwIL080960; Mon, 14 Apr 2014 15:27:58 -0700 (PDT) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201404142227.s3EMRwIL080960@chez.mckusick.com> To: Bryan Drewery Subject: Re: getdirentries cookies usage outside of UFS To: freebsd-fs@freebsd.org In-reply-to: <53489F0D.1020702@FreeBSD.org> Date: Mon, 14 Apr 2014 15:27:58 -0700 From: Kirk McKusick X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Apr 2014 22:28:03 -0000

> Date: Fri, 11 Apr 2014 21:03:57 -0500
> From: Bryan Drewery
> To: freebsd-fs@freebsd.org
> Subject: getdirentries cookies usage outside of UFS
> [...]
This code is specific to UFS. I concur with your fix of making it conditional on UFS. I feel guilty for putting that code in unconditionally in the first place. In my defense, it was 1982 and UFS was the only filesystem :-)

Kirk McKusick

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 02:21:25 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8D7EC894 for ; Tue, 15 Apr 2014 02:21:25 +0000 (UTC) Received: from mail-we0-x22c.google.com (mail-we0-x22c.google.com [IPv6:2a00:1450:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1920119AF for ; Tue, 15 Apr 2014 02:21:24 +0000 (UTC) Received: by mail-we0-f172.google.com with SMTP id t61so8959926wes.3 for ; Mon, 14 Apr 2014 19:21:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:reply-to:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=KrvKkpNf6sKb7p8alr69gH7zuySUnCQubPTVzza9G2Q=; b=X1XTcxR99hwF/WuaSlY3HEFHL90RY8L1deoTJ6CIlEMOVP2PBBu+1gOjMQ3sv3gzZM yHKRRReZ7I8BBPuSiHcY4M/lEHBNNLlxa7PqRlxidWIUq+e+3wYBeTkfd56yOhCWGEyu 0NtIBUz637TCiD331ZeK31MJSZAP6ewiPLvVmwU12a4curkrK5xHsbL+F0mO+TNv7qjY tiPIhi8GmPW8o6gnH62gqILShNIx6kDquPZ4L0qorzXE0X1vZUrIhJsESSrVDraHhptd Ehig6xU22ZjtG+OhzJXJGbG20fLTj0abu5fsALTD5KHprm/Zv+e29T0uDZj+v1Dz/RTa QLNQ== MIME-Version: 1.0 X-Received: by 10.180.7.227 with SMTP id m3mr105716wia.59.1397528483077; Mon, 14 Apr 2014 19:21:23 -0700 (PDT) Received: by 10.217.9.134 with HTTP; Mon, 14 Apr 2014 19:21:22 -0700 (PDT) In-Reply-To: <936380350.10694814.1397484017247.JavaMail.root@uoguelph.ca> References: <936380350.10694814.1397484017247.JavaMail.root@uoguelph.ca> Date: Tue, 15 Apr 2014 10:21:22 +0800 Message-ID: Subject: Re: NFSv4: prob err=10036 From: Marcelo Araujo To: Rick Macklem Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: araujo@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 02:21:25 -0000

Hello Rick,

Thanks for the prompt reply, and I'm sorry for my late reply; unfortunately I'm located in Taiwan, so the timezone is an issue.

Attached is my pcap.
Server IP: 172.17.32.42
Client IP: 172.17.32.54

It seems to be something related to RELEASE_LOCKOWNER; I'm still investigating, and maybe I can find a solution before you reply. If so, I will post it here.

Thanks again.
2014-04-14 22:00 GMT+08:00 Rick Macklem : > Marcelo Araujo wrote: > > Hi all, > > > > Anyone have saw this prob err before when try to mount a NFSv4? > > > > machine_a# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/ > > machine_a# mount_nfs: /mnt, : Input/output error > > machine_a# tail /var/log/messages |grep nfsv4 > > Apr 13 17:03:33 ESSD46B6E kernel: nfsv4 client/server protocol prob > > err=10036 > > > Well, 10036 is NFSERR_BADXDR (they are all in sys/fs/nfs/nfsproto.h). > This means that the server didn't like the RPC message presented to it. > (I have no idea why that would be the case for machine_a?) > > If you capture packets while attempting the mount, you can look at > them in wireshark and maybe see how they are trashed? (I just got home, > so I can take a look at a packet capture, if you email it to me as an > attachment.) > # tcpdump -s 0 -w mnt.pcap host 192.168.1.100 > - run on machine_a during the mount attempt, should do it (in mnt.pcap). > > rick > > > I have another machine with the same settings that can mount > > successfully > > the same NFSv4 share. > > > > machine_c# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/ > > machine_c# > > > > Best Regards, > > -- > > Marcelo Araujo > > araujo@FreeBSD.org > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > > -- Marcelo Araujo araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 05:49:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A4A9D498; Tue, 15 Apr 2014 05:49:18 +0000 (UTC) Received: from bellagio.open2view.net (bellagio.open2view.net [210.48.79.75]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6AACB1CDA; Tue, 15 Apr 2014 05:49:18 +0000 (UTC) Received: from bellagio.open2view.net (localhost [127.0.0.1]) by bellagio.open2view.net (Postfix) with ESMTP id DC50C12AA60A; Tue, 15 Apr 2014 17:39:31 +1200 (NZST) X-Virus-Scanned: amavisd-new at open2view.com Received: from bellagio.open2view.net ([127.0.0.1]) by bellagio.open2view.net (bellagio.open2view.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id WMyXuuzRUMwc; Tue, 15 Apr 2014 17:39:27 +1200 (NZST) Received: from [10.58.1.14] (241.196.252.27.dyn.cust.vf.net.nz [27.252.196.241]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: pmurray@nevada.net.nz) by bellagio.open2view.net (Postfix) with ESMTPSA id A3ED712AA5EC; Tue, 15 Apr 2014 17:39:26 +1200 (NZST) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: Panic in ZFS, solaris assert: sa.sa_magic == 0x2F505A From: Phil Murray In-Reply-To: <5347C5A0.3030806@FreeBSD.org> Date: Tue, 15 Apr 2014 17:39:25 +1200 Content-Transfer-Encoding: quoted-printable Message-Id: References: <5347C5A0.3030806@FreeBSD.org> To: Andriy Gapon X-Mailer: Apple Mail (2.1874) Cc: freebsd-fs@freebsd.org, stable@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 
2014 05:49:18 -0000

On 11/04/2014, at 10:36 pm, Andriy Gapon wrote:

> on 11/04/2014 11:02 Phil Murray said the following:
>> Hi there,
>>
>> I’ve recently experienced two kernel panics on 8.4-RELEASE (within 2 days of each other, and both around the same time of day oddly) with ZFS. Sorry no dump available, but panic below.
>>
>> Any ideas where to start solving this? Will upgrading to 9 (or 10) solve it?
>
> By chance, could the system be running zfs recv at the times when the panics
> happened?

I think it might be related to this bug reported on ZFS-on-linux when upgrading from v3 -> v5, which is exactly what I’ve done on this machine:

https://github.com/zfsonlinux/zfs/issues/2025

In my case, the bogus sa.sa_magic value looks like this:

panic: solaris assert: sa.sa_magic == 0x2F505A (0x5112fb3d == 0x2f505a), file:

$ date -r 0x5112fb3d
Thu Feb 7 13:54:21 NZDT 2013

Cheers

Phil

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 09:16:57 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AB21EC70; Tue, 15 Apr 2014 09:16:57 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id AA71810E8; Tue, 15 Apr 2014 09:16:56 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id MAA10019; Tue, 15 Apr 2014 12:16:39 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WZzTq-0000s3-Mh; Tue, 15 Apr 2014 12:16:38 +0300 Message-ID: <534CF8A6.1050205@FreeBSD.org> Date: Tue, 15 Apr 2014 12:15:18 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Phil Murray Subject: Re: Panic in ZFS, solaris assert: sa.sa_magic == 0x2F505A References: <5347C5A0.3030806@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 8bit Cc: freebsd-fs@FreeBSD.org, stable@FreeBSD.org, Matthew Ahrens X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 09:16:57 -0000

on 15/04/2014 08:39 Phil Murray said the following:
>
> On 11/04/2014, at 10:36 pm, Andriy Gapon wrote:
>
>> on 11/04/2014 11:02 Phil Murray said the following:
>>> Hi there,
>>>
>>> I’ve recently experienced two kernel panics on 8.4-RELEASE (within 2 days of each other, and both around the same time of day oddly) with ZFS. Sorry no dump available, but panic below.
>>>
>>> Any ideas where to start solving this? Will upgrading to 9 (or 10) solve it?
>>
>> By chance, could the system be running zfs recv at the times when the panics
>> happened?
>
> I think it might be related to this bug reported on ZFS-on-linux when upgrading from v3 -> v5, which is exactly what I’ve done on this machine:
>
> https://github.com/zfsonlinux/zfs/issues/2025
>
> In my case, the bogus sa.sa_magic value looks like this:
>
> panic: solaris assert: sa.sa_magic == 0x2F505A (0x5112fb3d == 0x2f505a), file:
>
> $ date -r 0x5112fb3d
> Thu Feb 7 13:54:21 NZDT 2013

Great job finding that ZoL bug report! And a very good job done by the people who analyzed the problem. Below is my guess about what could be wrong.

A thread is changing file attributes, and it could end up calling zfs_sa_upgrade() to convert the file's bonus from DMU_OT_ZNODE to DMU_OT_SA. The conversion is achieved in two steps:

- dmu_set_bonustype() to change the bonus type in the dnode
- sa_replace_all_by_template_locked() to re-populate the bonus data

dmu_set_bonustype() calls dnode_setbonus_type(), which does the following:

	dn->dn_bonustype = newtype;
	dn->dn_next_bonustype[tx->tx_txg & TXG_MASK] = dn->dn_bonustype;

Concurrently, the sync thread can run into the dnode if it was dirtied in an earlier txg. The sync thread calls dmu_objset_userquota_get_ids() via dnode_sync(). dmu_objset_userquota_get_ids() uses dn_bonustype, which has the new value, but the data corresponding to the txg being sync-ed is still in the old format.

As I understand, dmu_objset_userquota_get_ids() already uses dmu_objset_userquota_find_data() when before == B_FALSE to find a proper copy of the data corresponding to the txg being sync-ed. So, I think that in that case dmu_objset_userquota_get_ids() should also use values of dn_bonustype and dn_bonuslen that correspond to the txg. If I am not mistaken, those values could be deduced from dn_next_bonustype[tx->tx_txg & TXG_MASK] plus dn_phys->dn_bonustype, and dn_next_bonuslen[tx->tx_txg & TXG_MASK] plus dn_phys->dn_bonuslen.
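To make that deduction concrete, here is a minimal, untested sketch of the idea. The helper is hypothetical (not an existing function) and assumes the ZFS kernel headers and the dnode fields named above:

	/*
	 * Hypothetical helper: return the bonus type in effect for the
	 * txg being synced.  If a bonus-type change is pending for this
	 * txg, use it; otherwise the on-disk value in dn_phys still
	 * matches the data being written out.
	 */
	static uint8_t
	dnode_bonustype_for_txg(dnode_t *dn, uint64_t txg)
	{
		uint8_t next = dn->dn_next_bonustype[txg & TXG_MASK];

		return (next != 0 ? next : dn->dn_phys->dn_bonustype);
	}

dmu_objset_userquota_get_ids() would then consult this per-txg value (and an analogous one for dn_bonuslen) instead of the in-core dn_bonustype.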
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 12:28:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EB38958D for ; Tue, 15 Apr 2014 12:28:49 +0000 (UTC) Received: from mail-vc0-x22c.google.com (mail-vc0-x22c.google.com [IPv6:2607:f8b0:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id AE6B61452 for ; Tue, 15 Apr 2014 12:28:49 +0000 (UTC) Received: by mail-vc0-f172.google.com with SMTP id la4so9240372vcb.3 for ; Tue, 15 Apr 2014 05:28:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:from:date:message-id:subject:to:content-type; bh=aFFNo8Sbos2wqF/JVXa4XFfEB0W1risuHfhv6qCXsZM=; b=chuCtAXsILszyUDT0NcLOmc4V3knFANkEu3cwmeMNapFAWKQgAYF3NJH5hobJEQnD5 xr/re1uKHD+pmAiwZ+jgI3+17/f6ruisUqqa4e3THYzxbPXGH2gA0+QGcFmjbpu2J6ji PD+3zCCz+wmBySYQckDA7nwzFRWkHPb8BsWjtRTHstCjSZ1AsioQStMp5R9dHzogIfFS nJl7CyNArU8BRlOQAKzcZu7+cGjiF/6EGJBwxakmPBOpWvG0QQQ9U+4k/uJ8QAvwdgKy NivZ2NHsG2MQvpvgN9Yg8Qw4IGEG2k2FIhBAr9ozPtTl+v6+O21jNI3cpNpCUEHioHTx DaIw== X-Received: by 10.52.51.197 with SMTP id m5mr959684vdo.9.1397564928744; Tue, 15 Apr 2014 05:28:48 -0700 (PDT) MIME-Version: 1.0 Received: by 10.58.91.74 with HTTP; Tue, 15 Apr 2014 05:28:28 -0700 (PDT) From: Anton Sayetsky Date: Tue, 15 Apr 2014 15:28:28 +0300 Message-ID: Subject: ZFS prefetch efficiency To: freebsd-fs Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 12:28:50 -0000 Hello, I have a machine with 256 G RAM almost all dedicated to ARC. 
zfs-stats shows the following:

root@cs0:~# zfs-stats -EZ

------------------------------------------------------------------------
ZFS Subsystem Report				Tue Apr 15 15:25:10 2014
------------------------------------------------------------------------

ARC Efficiency:					1.07b
	Cache Hit Ratio:		73.53%	788.04m
	Cache Miss Ratio:		26.47%	283.62m
	Actual Hit Ratio:		64.30%	689.08m

	Data Demand Efficiency:		99.69%	359.48m
	Data Prefetch Efficiency:	31.54%	409.48m

	CACHE HITS BY CACHE LIST:
	  Anonymously Used:		10.38%	81.78m
	  Most Recently Used:		38.09%	300.13m
	  Most Frequently Used:		49.36%	388.95m
	  Most Recently Used Ghost:	0.56%	4.45m
	  Most Frequently Used Ghost:	1.62%	12.74m

	CACHE HITS BY DATA TYPE:
	  Demand Data:			45.48%	358.38m
	  Prefetch Data:		16.39%	129.15m
	  Demand Metadata:		37.79%	297.81m
	  Prefetch Metadata:		0.34%	2.69m

	CACHE MISSES BY DATA TYPE:
	  Demand Data:			0.39%	1.10m
	  Prefetch Data:		98.84%	280.33m
	  Demand Metadata:		0.76%	2.16m
	  Prefetch Metadata:		0.01%	39.25k

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)
DMU Efficiency:					3.30b
	Hit Ratio:			91.98%	3.04b
	Miss Ratio:			8.02%	264.94m

	Colinear:				264.94m
	  Hit Ratio:			0.01%	20.57k
	  Miss Ratio:			99.99%	264.92m

	Stride:					2.77b
	  Hit Ratio:			99.99%	2.77b
	  Miss Ratio:			0.01%	245.81k

DMU Misc:
	Reclaim:				264.92m
	  Successes:			0.60%	1.60m
	  Failures:			99.40%	263.32m

	Streams:				270.09m
	  +Resets:			0.06%	164.49k
	  -Resets:			99.94%	269.93m
	  Bogus:			0

------------------------------------------------------------------------
root@cs0:~#

I'm confused with the next 2 values:

1. Data Prefetch Efficiency: 31.54% 409.48m
2. DMU Efficiency: 3.30b Hit Ratio: 91.98% 3.04b

So here is my question: is prefetch really efficient or not?

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 12:31:24 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B4FD6687 for ; Tue, 15 Apr 2014 12:31:24 +0000 (UTC) Received: from smtp.01.com (smtp.01.com [199.36.142.181]) by mx1.freebsd.org (Postfix) with ESMTP id 80C19148A for ; Tue, 15 Apr 2014 12:31:24 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id 77B89296680 for ; Tue, 15 Apr 2014 07:25:54 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-1.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-1.01.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id h3YHTb_wGwBG for ; Tue, 15 Apr 2014 07:25:54 -0500 (CDT) Received: from smtp.01.com (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id 53E8F296682 for ; Tue, 15 Apr 2014 07:25:54 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id 4646B296681 for ; Tue, 15 Apr 2014 07:25:54 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-1.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-1.01.com [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 7geb1EcMS8EV for ; Tue, 15 Apr 2014 07:25:54 -0500 (CDT) Received: from newman.zxcvm.com (unknown [38.109.103.138]) by smtp-out-1.01.com (Postfix) with ESMTPSA id C78D229667B for ; Tue, 15 Apr 2014 07:25:53 -0500 (CDT) From: Jason Breitman Message-Id: Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: Differences in reporting by du df and usedbydataset Date: Tue, 15 Apr 2014 08:25:51 -0400 References: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> To: fs@freebsd.org In-Reply-To:
<0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 12:31:24 -0000

I have not received a reply to my question and wanted to post again so that the group could see the question again.

Jason Breitman
jbreitman@zxcvm.com

On Mar 26, 2014, at 12:53 PM, Jason Breitman wrote:

The different disk usage measurements are frequently discussed, and most of the time snapshots are the source of confusion. I use refquota to avoid this confusion for user-based file systems, but cannot explain the below reports and hope you can help.

Why is there an ~18 GB difference between du and df / usedbydataset? I included additional information so that you can see that used = usedbysnapshots + usedbydataset and that there are no reservations.

# du -sh /tank/users/auser
5.1G	/tank/users/auser

# df -h /tank/users/auser
Filesystem        Size    Used   Avail Capacity  Mounted on
tank/users/auser   35G     23G     11G    66%    /tank/users/auser

# zfs get usedbydataset tank/users/auser
NAME              PROPERTY       VALUE  SOURCE
tank/users/auser  usedbydataset  23.2G  -

# zfs get used,usedbysnapshots,usedbydataset tank/users/auser
NAME              PROPERTY         VALUE  SOURCE
tank/users/auser  used             63.9G  -
tank/users/auser  usedbysnapshots  40.7G  -
tank/users/auser  usedbydataset    23.2G  -

# zfs get refreservation,usedbyrefreservation tank/users/auser
NAME              PROPERTY              VALUE  SOURCE
tank/users/auser  refreservation        none   default
tank/users/auser  usedbyrefreservation  0

OS: FreeBSD 9.1

# zpool upgrade -v
This system is currently running ZFS pool version 28.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements

For more information on a particular version, including supported releases, see the ZFS Administration Guide.

# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and filesystem user identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases, see the ZFS Administration Guide.
Jason Breitman
jbreitman@zxcvm.com

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 13:09:53 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 325C8224 for ; Tue, 15 Apr 2014 13:09:53 +0000 (UTC) Received: from mail.in-addr.com (noop.in-addr.com [208.58.23.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 016701826 for ; Tue, 15 Apr 2014 13:09:52 +0000 (UTC) Received: from gjp by mail.in-addr.com with local (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1Wa37P-000EgZ-Vu; Tue, 15 Apr 2014 09:09:44 -0400 Date: Tue, 15 Apr 2014 09:09:43 -0400 From: Gary Palmer To: Jason Breitman Subject: Re: Differences in reporting by du df and usedbydataset Message-ID: <20140415130943.GC15884@in-addr.com> References: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: gpalmer@freebsd.org X-SA-Exim-Scanned: No (on mail.in-addr.com); SAEximRunCond expanded to false Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 13:09:53 -0000

On Tue, Apr 15, 2014 at 08:25:51AM -0400, Jason Breitman wrote:
> I have not received a reply to my question and wanted to post again so that the group could see the question again.

Are there any open files on the filesystem? fstat or lsof should tell you, I think.

Files that are open but that have been deleted show up in df but not du. That is not unique to zfs.

If there are no open files I'm not sure what else to suggest. Does a scrub pass OK?

Regards,

Gary
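To see the effect Gary describes first-hand, here is a small illustrative C program (hypothetical; the path is just an example taken from the thread). While it sleeps, df still counts the unlinked file's blocks, but du cannot find the file in the namespace.

	/* ghostfile.c - illustrative: an open-but-unlinked file consumes space. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int
	main(void)
	{
		static char buf[1024 * 1024];
		int fd, i;

		if ((fd = open("/tank/users/auser/ghost", O_CREAT | O_RDWR, 0600)) < 0) {
			perror("open");
			return (1);
		}
		memset(buf, 0xAA, sizeof(buf));
		for (i = 0; i < 1024; i++)		/* write ~1GB */
			(void)write(fd, buf, sizeof(buf));
		(void)unlink("/tank/users/auser/ghost");	/* now invisible to du */
		printf("unlinked but still open; compare du and df now\n");
		pause();				/* hold the last reference */
		return (0);
	}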
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 13:23:54 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B3DFD9E5; Tue, 15 Apr 2014 13:23:54 +0000 (UTC) Received: from smtp.01.com (smtp.01.com [199.36.142.181]) by mx1.freebsd.org (Postfix) with ESMTP id 82CC419F0; Tue, 15 Apr 2014 13:23:54 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id E11722966D5; Tue, 15 Apr 2014 08:23:53 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-1.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-1.01.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Qd5-GtigAsek; Tue, 15 Apr 2014 08:23:53 -0500 (CDT) Received: from smtp.01.com (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id BC0A12966DA; Tue, 15 Apr 2014 08:23:53 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id AEB5E2966D2; Tue, 15 Apr 2014 08:23:53 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-1.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-1.01.com [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id WF7L_6sZVr8h; Tue, 15 Apr 2014 08:23:53 -0500 (CDT) Received: from newman.zxcvm.com (unknown [38.109.103.138]) by smtp-out-1.01.com (Postfix) with ESMTPSA id 0DDA12966D5; Tue, 15 Apr 2014 08:23:52 -0500 (CDT) Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: Differences in reporting by du df and usedbydataset From: Jason Breitman In-Reply-To: Date: Tue, 15 Apr 2014 09:23:50 -0400
Message-Id: <191CFFCB-6E4C-4F17-AF6E-33384E1572A5@zxcvm.com> References: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> <20140415130943.GC15884@in-addr.com> To: Gary Palmer X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 13:23:54 -0000

There are many open files, as the server is a file server, and I can see 60 files open that have been deleted. 9 of those files belong to the user that I detailed below. Weekly zpool scrubs are run.

I appreciate your response, but am not convinced that those 9 files add up to the 18 GB difference for the user below. I can see the size of the files from lsof, which adds up to just over 1 MB.

Jason Breitman
jbreitman@zxcvm.com

On Apr 15, 2014, at 9:09 AM, Gary Palmer wrote:

On Tue, Apr 15, 2014 at 08:25:51AM -0400, Jason Breitman wrote:
> I have not received a reply to my question and wanted to post again so that the group could see the question again.

Are there any open files on the filesystem? fstat or lsof should tell you, I think.

Files that are open but that have been deleted show up in df but not du. That is not unique to zfs.

If there are no open files I'm not sure what else to suggest. Does a scrub pass OK?

Regards,

Gary
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 13:40:58 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EEA67F63; Tue, 15 Apr 2014 13:40:58 +0000 (UTC) Received: from mail0.glenbarber.us (mail0.glenbarber.us [IPv6:2607:fc50:1:2300:1001:1001:1001:face]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mail0.glenbarber.us", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id BBEDA1B37; Tue, 15 Apr 2014 13:40:58 +0000 (UTC) Received: from glenbarber.us (70.15.88.86.res-cmts.sewb.ptd.net [70.15.88.86]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: gjb) by mail0.glenbarber.us (Postfix) with ESMTPSA id 0BC691022C; Tue, 15 Apr 2014 13:40:56 +0000 (UTC) DKIM-Filter: OpenDKIM Filter v2.8.3 mail0.glenbarber.us 0BC691022C Authentication-Results: mail0.glenbarber.us; dkim=none reason="no signature"; dkim-adsp=none Date: Tue, 15 Apr 2014 09:40:55 -0400 From: Glen Barber To: Jason Breitman Subject: Re: Differences in reporting by du df and usedbydataset Message-ID: <20140415134055.GC1300@glenbarber.us> References: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> <20140415130943.GC15884@in-addr.com> <191CFFCB-6E4C-4F17-AF6E-33384E1572A5@zxcvm.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha256; protocol="application/pgp-signature"; boundary="VrqPEDrXMn8OVzN4" Content-Disposition: inline In-Reply-To:
<191CFFCB-6E4C-4F17-AF6E-33384E1572A5@zxcvm.com> X-Operating-System: FreeBSD 11.0-CURRENT amd64 X-SCUD-Definition: Sudden Completely Unexpected Dataloss X-SULE-Definition: Sudden Unexpected Learning Event User-Agent: Mutt/1.5.23 (2014-03-12) Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 13:40:59 -0000

On Tue, Apr 15, 2014 at 09:23:50AM -0400, Jason Breitman wrote:
> I appreciate your response, but am not convinced that those 9
> files add up to the 18 GB difference for the user below.

What does 'du -shA .' say?

Also, please show 'zfs get compression'.

Glen

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 14:09:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 57AFCEFD for ; Tue, 15 Apr 2014 14:09:15 +0000 (UTC) Received: from mx1.dui.nkhosting.net (mx1.dui.nkhosting.net [213.9.94.26]) by mx1.freebsd.org (Postfix) with ESMTP id 1B9CE1043 for ; Tue, 15 Apr 2014 14:09:14 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by mx1.dui.nkhosting.net (Postfix) with ESMTP id A259D256A8AC6 for ; Tue, 15 Apr 2014 16:09:08 +0200 (CEST) X-Virus-Scanned: Debian amavisd-new at mx1.dui.nkhosting.net Received: from mx1.dui.nkhosting.net ([127.0.0.1]) by localhost (mx1.dui.nkhosting.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HOlX7pwLeW0w for ; Tue, 15 Apr 2014 16:08:54 +0200 (CEST) Received: from [192.168.36.108] (w36.nkhosting.net [85.183.116.20]) (Authenticated sender: pjlists@netzkommune.de) by mx1.dui.nkhosting.net (Postfix) with ESMTP id 73E1B20E4B1B9 for ; Tue, 15 Apr 2014 16:08:53 +0200 (CEST) From: Philip Jocks Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Subject: ZFS bootloader issues Message-Id: Date: Tue, 15 Apr 2014 16:08:52 +0200 To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\)) X-Mailer: Apple Mail (2.1510) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 14:09:15 -0000

Hi, I hope this is the right list.
I recently installed a new server with FreeBSD 10, roughly following this Wiki page: https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

The machine holds 5x3TB S-ATA disks in hardware RAID 6, since the controller doesn't support JBOD and I didn't want to create 5 RAID0 arrays because that sounded like more trouble than it would help.

Yesterday I found out that the ISP didn't get me a dual-CPU machine but only a single-CPU machine, so today they installed a second CPU and started the box again. It wouldn't boot; the bootloader throws this error:

--
ZFS: i/o error - all block copies unavailable
ZFS: can't find filesystem by guid
ZFS: i/o error - all block copies unavailable
ZFS: can't find filesystem by guid
can't load 'kernel'
--

I tried re-adding the bootcode by issuing "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0", but it didn't help. If I boot from CD I can access the filesystem just fine.

Any pointers would be greatly appreciated.

Cheers,
Philip

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 14:26:38 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C8C2E64B; Tue, 15 Apr 2014 14:26:38 +0000 (UTC) Received: from smtp.01.com (smtp.01.com [199.36.142.181]) by mx1.freebsd.org (Postfix) with ESMTP id 9509C123E; Tue, 15 Apr 2014 14:26:38 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id A1D6529C166; Tue, 15 Apr 2014 09:26:37 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-1.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-1.01.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id tuRQe9AtxdWZ; Tue, 15 Apr 2014 09:26:37 -0500 (CDT) Received: from smtp.01.com (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id 7C3CA29C16C; Tue, 15 Apr 2014 09:26:37 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by smtp-out-1.01.com (Postfix) with ESMTP id 6E82E29C16A; Tue, 15 Apr 2014 09:26:37 -0500 (CDT) X-Virus-Scanned: amavisd-new at smtp-out-1.01.com Received: from smtp.01.com ([127.0.0.1]) by localhost (smtp-out-1.01.com [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 0Ay-LfzDTwlj; Tue, 15 Apr 2014 09:26:37 -0500 (CDT) Received: from newman.zxcvm.com (unknown [38.109.103.138]) by smtp-out-1.01.com (Postfix) with ESMTPSA id 7541129C157; Tue, 15 Apr 2014 09:26:36 -0500 (CDT) Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: Differences in reporting by du df and usedbydataset From: Jason Breitman In-Reply-To: Date: Tue, 15 Apr 2014 10:26:34 -0400 Message-Id: References: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> <20140415130943.GC15884@in-addr.com> <191CFFCB-6E4C-4F17-AF6E-33384E1572A5@zxcvm.com> <20140415134055.GC1300@glenbarber.us> To: Glen Barber X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 14:26:38 -0000

# du -shA /tank/users/auser
4.4G /tank/users/auser

# zfs get compression tank/users/auser
NAME PROPERTY VALUE SOURCE
tank/users/auser compression off default

Jason Breitman
jbreitman@zxcvm.com On Apr 15, 2014, at 9:40 AM, Glen Barber wrote: On Tue, Apr 15, 2014 at 09:23:50AM -0400, Jason Breitman wrote: > I appreciate your response, but am not convinced that those 9 > files add up to the 18 GB difference for the user below. What does 'du -shA .' say? Also, please show 'zfs get compression'. Glen From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 16:54:29 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0BA94B82 for ; Tue, 15 Apr 2014 16:54:29 +0000 (UTC) Received: from cavuit02.kulnet.kuleuven.be (rhcavuit02.kulnet.kuleuven.be [IPv6:2a02:2c40:0:c0::25:130]) by mx1.freebsd.org (Postfix) with ESMTP id BA5DC1253 for ; Tue, 15 Apr 2014 16:54:28 +0000 (UTC) X-KULeuven-Envelope-From: bram.vandoren@ster.kuleuven.be X-Spam-Status: not spam, SpamAssassin (not cached, score=-48.726, required 5, autolearn=disabled, LOCAL_SMTPS -50.00, RDNS_NONE 1.27) X-KULeuven-Scanned: Found to be clean X-KULeuven-ID: 36A48128049.A0D05 X-KULeuven-Information: Katholieke Universiteit Leuven Received: from icts-p-smtps-1.cc.kuleuven.be (icts-p-smtps-1e.kulnet.kuleuven.be [134.58.240.33]) by cavuit02.kulnet.kuleuven.be (Postfix) with ESMTP id 36A48128049; Tue, 15 Apr 2014 18:54:24 +0200 (CEST) Received: from miaplacidus.ster.kuleuven.be (unknown [10.33.178.95]) by icts-p-smtps-1.cc.kuleuven.be (Postfix) with ESMTP id 2CB1F4070; Tue, 15 Apr 2014 18:54:21 +0200 (CEST) Message-ID: <534D643D.7030901@ster.kuleuven.be> Date: Tue, 15 Apr 2014 18:54:21 +0200 X-Kuleuven: This mail passed the K.U.Leuven mailcluster From: Bram Vandoren User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Jason Breitman Subject: Re: Differences in reporting by du df and usedbydataset References: <0D13866F-04ED-4572-B7C9-04DC806B6513@zxcvm.com> <20140415130943.GC15884@in-addr.com> <191CFFCB-6E4C-4F17-AF6E-33384E1572A5@zxcvm.com> <20140415134055.GC1300@glenbarber.us> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 16:54:29 -0000 You should check the compressratio property if you enabled compression in the past and disabled it again. Cheers, Bram On 04/15/2014 04:26 PM, Jason Breitman wrote: > # du -shA /tank/users/auser > 4.4G /tank/users/auser > > # zfs get compression tank/users/auser > NAME PROPERTY VALUE SOURCE > tank/users/auser compression off default > > Jason Breitman > jbreitman@zxcvm.com > > > > On Apr 15, 2014, at 9:40 AM, Glen Barber wrote: > > On Tue, Apr 15, 2014 at 09:23:50AM -0400, Jason Breitman wrote: >> I appreciate your response, but am not convinced that those 9 >> files add up to the 18 GB difference for the user below. > > What does 'du -shA .' say? > > Also, please show 'zfs get compression'. 
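For anyone chasing this kind of du-versus-usedbydataset gap, the sketch below (reusing the dataset name from this thread; output omitted) enumerates where each counter's bytes live. It does not prove a cause by itself, but it usually narrows one down:

# Per-snapshot accounting: 'used' is space unique to that snapshot,
# 'refer' is what the filesystem referenced when it was taken.
zfs list -r -t snapshot -o name,used,refer tank/users/auser

# Full usedby* breakdown for the dataset and any children:
zfs get -r usedbysnapshots,usedbydataset,usedbychildren tank/users/auser

# Allocated blocks vs. apparent file sizes for the live data:
du -sh /tank/users/auser
du -shA /tank/users/auser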
> > Glen From owner-freebsd-fs@FreeBSD.ORG Tue Apr 15 21:01:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3827513F; Tue, 15 Apr 2014 21:01:29 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id DE78F1BC8; Tue, 15 Apr 2014 21:01:28 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqUEAJGdTVODaFve/2dsb2JhbABag0FXgxG4Z4ZlUYFBdIIeBwEBAQQBAQEgKyALGxgCAg0ZAikBCSYOBwQBGgIEhSgHgiwNqWqiYheBKYxYEAIBGzQHgm+BSQSWCoQOkQ+DTSExgT0 X-IronPort-AV: E=Sophos;i="4.97,866,1389762000"; d="scan'208";a="114741925" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 15 Apr 2014 17:01:21 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id C3079B404F; Tue, 15 Apr 2014 17:01:21 -0400 (EDT) Date: Tue, 15 Apr 2014 17:01:21 -0400 (EDT) From: Rick Macklem To: araujo@FreeBSD.org Message-ID: <1786007789.11586463.1397595681788.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: NFSv4: prob err=10036 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Linux)/7.2.1_GA_2790) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Apr 2014 21:01:29 -0000 Marcelo Araujo wrote: > > Hello Rick, > > > Thanks by the prompt reply, and I'm sorry my late reply, > unfortunately I'm located in Taiwan, so, timezone is an issue. > > > So here attached is my pcap. > > > Server IP: 172.17.32.42 > Client IP: 172.17.32.54 > > > Something related with RELEASE_LOCKOWNER, I'm still investigating, > maybe I can find a solution before you reply, if yes, I will post > here. > Well, I looked at the packet trace and it is weird. One field (the NFSv4 operation #) is incorrect in the packet. It should have been 33 (0x21), which is PUTROOTFH and instead it is 39 (0x27), which is RELEASELOCKOWNER. All the arguments after the operation # are correct for the RPC, if that operation# was 33 (PUTROOTFH). Since the call looks like this (around line#4303 in sys/fs/nfsclient/nfs_clrpcops.c): nfscl_reqstart(nd, NFSPROC_PUTROOTFH, nmp, NULL, 0, &opcntp, NULL); I can't imagine how NFSPROC_PUTROOTFH became NFSPROC_RELEASELCKOWN? (Btw, there is a mapping from NFSPROC_xxx to NFSV4OP_xxx that occurs, so these arguments are 33 and 34 respectively and not 33 and 39.) So, somehow the argument gets incremented by one when it is on the stack for the call. (It would be 34 in nfscl_reqstart(), since the tag is "Rellckown" and not "Dirpath" in the packet header. This tag is for debugging only and doesn't affect the RPC's semantics. For once, it was useful;-) So, this isn't some data error later, such as "on the wire". All I can suggest is that something is stomping on this field on the stack or there is a memory problem where this stack argument sits? Aren't computers fun? rick > > Thanks again. 
> > > > 2014-04-14 22:00 GMT+08:00 Rick Macklem < rmacklem@uoguelph.ca > : > > > > Marcelo Araujo wrote: > > Hi all, > > > > Anyone have saw this prob err before when try to mount a NFSv4? > > > > machine_a# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/ > > machine_a# mount_nfs: /mnt, : Input/output error > > machine_a# tail /var/log/messages |grep nfsv4 > > Apr 13 17:03:33 ESSD46B6E kernel: nfsv4 client/server protocol prob > > err=10036 > > > Well, 10036 is NFSERR_BADXDR (they are all in sys/fs/nfs/nfsproto.h). > This means that the server didn't like the RPC message presented to > it. > (I have no idea why that would be the case for machine_a?) > > If you capture packets while attempting the mount, you can look at > them in wireshark and maybe see how they are trashed? (I just got > home, > so I can take a look at a packet capture, if you email it to me as an > attachment.) > # tcpdump -s 0 -w mnt.pcap host 192.168.1.100 > - run on machine_a during the mount attempt, should do it (in > mnt.pcap). > > rick > > > > I have another machine with the same settings that can mount > > successfully > > the same NFSv4 share. > > > > machine_c# mount -t nfs -o nfsv4 192.168.2.100:/a /mnt/ > > machine_c# > > > > Best Regards, > > -- > > Marcelo Araujo > > araujo@FreeBSD.org > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to " > > freebsd-fs-unsubscribe@freebsd.org " > > > > > > > -- > Marcelo Araujo > araujo@FreeBSD.org From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 00:38:57 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 103DDA15; Wed, 16 Apr 2014 00:38:57 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D96F811A6; Wed, 16 Apr 2014 00:38:56 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G0cub0089564; Wed, 16 Apr 2014 00:38:56 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G0cuEN089563; Wed, 16 Apr 2014 00:38:56 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 00:38:56 GMT Message-Id: <201404160038.s3G0cuEN089563@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/181791: [zfs] ZFS ARC Deadlock X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 00:38:57 -0000 Old Synopsis: ZFS ARC Deadlock New Synopsis: [zfs] ZFS ARC Deadlock Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 00:38:40 UTC 2014 Responsible-Changed-Why: reclassify and assign. 
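Back on the err=10036 thread above: the op-number mismatch Rick found can be eyeballed without reading raw hex by dissecting the capture with tshark. A sketch, assuming Wireshark's NFS dissector field names (these vary by version, so treat the field list as an assumption):

# Capture the failing mount attempt, as suggested in the thread:
tcpdump -s 0 -w mnt.pcap host 192.168.2.100
# List the NFSv4 operation numbers per frame; PUTROOTFH is 33 and
# RELEASE_LOCKOWNER is 39, so a 39 where a 33 belongs reproduces
# the corruption described above.
tshark -r mnt.pcap -Y nfs -T fields -e frame.number -e nfs.opcode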
http://www.freebsd.org/cgi/query-pr.cgi?pr=181791 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 01:00:24 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4DD41288; Wed, 16 Apr 2014 01:00:24 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 22CE51369; Wed, 16 Apr 2014 01:00:24 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G10NHF099389; Wed, 16 Apr 2014 01:00:23 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G10NNS099388; Wed, 16 Apr 2014 01:00:23 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 01:00:23 GMT Message-Id: <201404160100.s3G10NNS099388@freefall.freebsd.org> To: harrison@glsan.com, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/184092: [zfs] zfs zvol devices are not appearing till after reimport of pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 01:00:24 -0000 Old Synopsis: zfs zvol devices are not appearing till after reimport of pool New Synopsis: [zfs] zfs zvol devices are not appearing till after reimport of pool State-Changed-From-To: open->closed State-Changed-By: linimon State-Changed-When: Wed Apr 16 00:59:45 UTC 2014 State-Changed-Why: see kern/178999. 
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 00:59:45 UTC 2014 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=184092 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 01:03:10 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 566C3488; Wed, 16 Apr 2014 01:03:10 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2BD931451; Wed, 16 Apr 2014 01:03:10 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G13Axs000489; Wed, 16 Apr 2014 01:03:10 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G13A1Z000488; Wed, 16 Apr 2014 01:03:10 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 01:03:10 GMT Message-Id: <201404160103.s3G13A1Z000488@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/184677: [zfs] [panic] ZFS snapshot umount kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 01:03:10 -0000 Old Synopsis: ZFS snapshot umount kernel panic New Synopsis: [zfs] [panic] ZFS snapshot umount kernel panic Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 01:02:38 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=184677 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 01:18:27 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A3A5DB7A; Wed, 16 Apr 2014 01:18:27 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 787D715B1; Wed, 16 Apr 2014 01:18:27 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G1IRR9005268; Wed, 16 Apr 2014 01:18:27 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G1IRNk005267; Wed, 16 Apr 2014 01:18:27 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 01:18:27 GMT Message-Id: <201404160118.s3G1IRNk005267@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/186112: [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 01:18:27 -0000 Old Synopsis: ZFS Panic/Solaris Assert/zap.c:479 New Synopsis: [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479 Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 01:18:05 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=186112 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 01:35:40 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5ABC72AE; Wed, 16 Apr 2014 01:35:40 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2E2DB1750; Wed, 16 Apr 2014 01:35:40 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G1ZesY011976; Wed, 16 Apr 2014 01:35:40 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G1Zei0011975; Wed, 16 Apr 2014 01:35:40 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 01:35:40 GMT Message-Id: <201404160135.s3G1Zei0011975@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/188328: [zfs] UPDATING should provide caveats for running `zpool upgrade` X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 01:35:40 -0000 Old Synopsis: UPDATING should provide caveats for running `zpool upgrade` New Synopsis: [zfs] UPDATING should provide caveats for running `zpool upgrade` Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 01:34:11 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=188328 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 01:45:51 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D0B327BF; Wed, 16 Apr 2014 01:45:51 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A59D51850; Wed, 16 Apr 2014 01:45:51 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G1jp31015751; Wed, 16 Apr 2014 01:45:51 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G1jo6v015750; Wed, 16 Apr 2014 01:45:50 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 01:45:50 GMT Message-Id: <201404160145.s3G1jo6v015750@freefall.freebsd.org> To: kmd@kmd.twbbs.org, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/187414: [zfs] ZFS Write Deadlock on 8.4 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 01:45:51 -0000 Old Synopsis: ZFS Write Deadlock New Synopsis: [zfs] ZFS Write Deadlock on 8.4 State-Changed-From-To: open->suspended State-Changed-By: linimon State-Changed-When: Wed Apr 16 01:43:20 UTC 2014 State-Changed-Why: Clean up synopsis and assignment. Setting to Suspended since submitter notes it no longer happens with 10.0, but leaving in GNATS to warn others. 
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 01:43:20 UTC 2014 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=187414 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 02:08:55 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5EC58DB7; Wed, 16 Apr 2014 02:08:55 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 311F51A1D; Wed, 16 Apr 2014 02:08:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G28tqk023754; Wed, 16 Apr 2014 02:08:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G28t2O023753; Wed, 16 Apr 2014 02:08:55 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 02:08:55 GMT Message-Id: <201404160208.s3G28t2O023753@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/186652: [smbfs] [panic] crash during umount -a -t smbfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 02:08:55 -0000 Old Synopsis: Crash during umount -a -t smbfs New Synopsis: [smbfs] [panic] crash during umount -a -t smbfs Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 02:08:27 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=186652 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 02:10:40 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 661D7EBA; Wed, 16 Apr 2014 02:10:40 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 399891AA3; Wed, 16 Apr 2014 02:10:40 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G2AekQ026466; Wed, 16 Apr 2014 02:10:40 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G2Aeex026465; Wed, 16 Apr 2014 02:10:40 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 02:10:40 GMT Message-Id: <201404160210.s3G2Aeex026465@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/186854: [ext2fs] [patch] allow mounting an ext4 file system with uninit_bg and flex_bg in read-only mode X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 02:10:40 -0000 Old Synopsis: mount a ext4 file system with uninit_bg and flex_bg in read-only mode New Synopsis: [ext2fs] [patch] allow mounting an ext4 file system with uninit_bg and flex_bg in read-only mode Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 02:09:05 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
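For context on this ext4 PR: with the requested read-only support in place, the mount goes through the ext2fs driver. A minimal sketch, with a hypothetical device node:

# Read-only mount of an ext4 partition (device name is illustrative):
mount -t ext2fs -o ro /dev/ada1p2 /mnt
# Filesystems carrying uninit_bg/flex_bg are expected to refuse a
# read-write mount; only -o ro is covered by this change.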
http://www.freebsd.org/cgi/query-pr.cgi?pr=186854 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 02:11:12 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D5AD1F81; Wed, 16 Apr 2014 02:11:12 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9467D1AB3; Wed, 16 Apr 2014 02:11:12 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3G2BCsP026526; Wed, 16 Apr 2014 02:11:12 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3G2BCHq026525; Wed, 16 Apr 2014 02:11:12 GMT (envelope-from linimon) Date: Wed, 16 Apr 2014 02:11:12 GMT Message-Id: <201404160211.s3G2BCHq026525@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/186942: [zfs] [panic] Fatal trap 12 (seems zfs related) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 02:11:12 -0000 Old Synopsis: Fatal trap 12 (seems zfs related) New Synopsis: [zfs] [panic] Fatal trap 12 (seems zfs related) Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 16 02:10:49 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=186942
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 13:40:06 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2FE1131D for ; Wed, 16 Apr 2014 13:40:06 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 033BF10F1 for ; Wed, 16 Apr 2014 13:40:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3GDe50h070395 for ; Wed, 16 Apr 2014 13:40:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3GDe5iG070393; Wed, 16 Apr 2014 13:40:05 GMT (envelope-from gnats) Date: Wed, 16 Apr 2014 13:40:05 GMT Message-Id: <201404161340.s3GDe5iG070393@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: John Baldwin Subject: Re: kern/187414: [zfs] ZFS Write Deadlock on 8.4 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: John Baldwin List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 13:40:06 -0000 The following reply was made to PR kern/187414; it has been noted by GNATS. From: John Baldwin To: bug-followup@freebsd.org, kmd@kmd.twbbs.org Cc: Subject: Re: kern/187414: [zfs] ZFS Write Deadlock on 8.4 Date: Wed, 16 Apr 2014 09:39:26 -0400 -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Wed Apr 16 14:10:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6C9D0E9F for ; Wed, 16 Apr 2014 14:10:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4095C13DB for ; Wed, 16 Apr 2014 14:10:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3GEA13t077342 for ; Wed, 16 Apr 2014 14:10:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3GEA1nO077341; Wed, 16 Apr 2014 14:10:01 GMT (envelope-from gnats) Date: Wed, 16 Apr 2014 14:10:01 GMT Message-Id: <201404161410.s3GEA1nO077341@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: John Baldwin Subject: Re: kern/187414: [zfs] ZFS Write Deadlock on 8.4 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: John Baldwin List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 16 Apr 2014 14:10:01 -0000 The following reply was made to PR kern/187414; it has been noted by GNATS.
From: John Baldwin To: bug-followup@freebsd.org, kmd@kmd.twbbs.org Cc: Subject: Re: kern/187414: [zfs] ZFS Write Deadlock on 8.4 Date: Wed, 16 Apr 2014 09:40:31 -0400 If the processes are hung in the "dmu_tx_delay" wait channel, then this might have been fixed by: http://svnweb.freebsd.org/base?view=revision&revision=264505 -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 01:13:41 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 113878EA; Thu, 17 Apr 2014 01:13:41 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id DB2EA1D8E; Thu, 17 Apr 2014 01:13:40 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3H1De1E098585; Thu, 17 Apr 2014 01:13:40 GMT (envelope-from pfg@freefall.freebsd.org) Received: (from pfg@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3H1DeQN098584; Thu, 17 Apr 2014 01:13:40 GMT (envelope-from pfg) Date: Thu, 17 Apr 2014 01:13:40 GMT Message-Id: <201404170113.s3H1DeQN098584@freefall.freebsd.org> To: gnehzuil@gmail.com, pfg@FreeBSD.org, freebsd-fs@FreeBSD.org, pfg@FreeBSD.org From: pfg@FreeBSD.org Subject: Re: kern/186854: [ext2fs] [patch] allow mounting an ext4 file system with uninit_bg and flex_bg in read-only mode X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 01:13:41 -0000 Synopsis: [ext2fs] [patch] allow mounting an ext4 file system with uninit_bg and flex_bg in read-only mode State-Changed-From-To: open->closed State-Changed-By: pfg State-Changed-When: Thu Apr 17 01:10:51 UTC 2014 State-Changed-Why: Variant committed as r262346 and MFC'd, Thank you for your contribution! Responsible-Changed-From-To: freebsd-fs->pfg Responsible-Changed-By: pfg Responsible-Changed-When: Thu Apr 17 01:10:51 UTC 2014 Responsible-Changed-Why: Grab it as I committed a variant already. 
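Returning to the kern/187414 followup above: a quick way to test jhb's dmu_tx_delay theory on a live system is to look at the wait channels of the stuck writers. A sketch (the pid is hypothetical):

# The MWCHAN column from ps -l shows the wait channel on FreeBSD:
ps -axl | grep dmu_tx
# Kernel stack of a suspect process:
procstat -kk 1234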
http://www.freebsd.org/cgi/query-pr.cgi?pr=186854 From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 04:01:21 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8CDD6CAC for ; Thu, 17 Apr 2014 04:01:21 +0000 (UTC) Received: from hergotha.csail.mit.edu (wollman-1-pt.tunnel.tserv4.nyc4.ipv6.he.net [IPv6:2001:470:1f06:ccb::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 188A11D5D for ; Thu, 17 Apr 2014 04:01:20 +0000 (UTC) Received: from hergotha.csail.mit.edu (localhost [127.0.0.1]) by hergotha.csail.mit.edu (8.14.7/8.14.7) with ESMTP id s3H41HUS082803 for ; Thu, 17 Apr 2014 00:01:18 -0400 (EDT) (envelope-from wollman@hergotha.csail.mit.edu) Received: (from wollman@localhost) by hergotha.csail.mit.edu (8.14.7/8.14.4/Submit) id s3H41GTV082800; Thu, 17 Apr 2014 00:01:16 -0400 (EDT) (envelope-from wollman) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <21327.21004.879860.960260@hergotha.csail.mit.edu> Date: Thu, 17 Apr 2014 00:01:16 -0400 From: Garrett Wollman To: freebsd-fs@freebsd.org Subject: NFS behavior on a ZFS dataset with no quota remaining X-Mailer: VM 7.17 under 21.4 (patch 22) "Instant Classic" XEmacs Lucid X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (hergotha.csail.mit.edu [127.0.0.1]); Thu, 17 Apr 2014 00:01:18 -0400 (EDT) X-Spam-Status: No, score=-0.8 required=5.0 tests=ALL_TRUSTED, HEADER_FROM_DIFFERENT_DOMAINS autolearn=disabled version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on hergotha.csail.mit.edu X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 04:01:21 -0000 Recently one of our users managed to constipate one of our NFS servers in an odd way. They hit the quota on their dataset, and rather than having all of their writes error out as they should have, the NFS server instead stopped responding to all requests. 
While this was happening, sysctl vfs.nfsd reported: vfs.nfsd.disable_checkutf8: 0 vfs.nfsd.server_max_nfsvers: 4 vfs.nfsd.server_min_nfsvers: 2 vfs.nfsd.nfs_privport: 0 vfs.nfsd.async: 0 vfs.nfsd.enable_locallocks: 0 vfs.nfsd.issue_delegations: 0 vfs.nfsd.commit_miss: 0 vfs.nfsd.commit_blks: 0 vfs.nfsd.mirrormnt: 1 vfs.nfsd.cachetcp: 1 vfs.nfsd.tcpcachetimeo: 300 vfs.nfsd.udphighwater: 500 vfs.nfsd.tcphighwater: 150000 vfs.nfsd.minthreads: 16 vfs.nfsd.maxthreads: 64 vfs.nfsd.threads: 18 vfs.nfsd.request_space_used: 36520872 vfs.nfsd.request_space_used_highest: 47536420 vfs.nfsd.request_space_high: 47185920 vfs.nfsd.request_space_low: 31457280 vfs.nfsd.request_space_throttled: 1 vfs.nfsd.request_space_throttle_count: 8451 vfs.nfsd.fha.enable: 1 vfs.nfsd.fha.bin_shift: 22 vfs.nfsd.fha.max_nfsds_per_fh: 8 vfs.nfsd.fha.max_reqs_per_nfsd: 0 vfs.nfsd.fha.fhe_stats: fhe 0xfffffe103fcab6c0: { fh: 32922711030235146 num_rw: 457 num_exclusive: 7 num_threads: 2 thread 0xfffffe0112f72a00 offset 26738688 (count 457) thread 0xfffffe04b0751080 offset 4390912 (count 7) }, fhe 0xfffffe02fe8acd80: { fh: 32922925778599946 num_rw: 90 num_exclusive: 0 num_threads: 2 thread 0xfffffe0e77ee2c80 offset 6946816 (count 17) thread 0xfffffe0d25c1f280 offset 2752512 (count 73) } I increased their quota by a terabyte, and NFS immediately started working again, for all clients. But this seems, um, very bad. Can anyone explain what's going on in either NFS or ZFS that could cause this? I must emphasize that the zpool was by no means out of space; it was merely one client dataset (out of many) that hit its quota. A few seconds after increasing the quota, the sysctl tree looks like this: vfs.nfsd.disable_checkutf8: 0 vfs.nfsd.server_max_nfsvers: 4 vfs.nfsd.server_min_nfsvers: 2 vfs.nfsd.nfs_privport: 0 vfs.nfsd.async: 0 vfs.nfsd.enable_locallocks: 0 vfs.nfsd.issue_delegations: 0 vfs.nfsd.commit_miss: 0 vfs.nfsd.commit_blks: 0 vfs.nfsd.mirrormnt: 1 vfs.nfsd.cachetcp: 1 vfs.nfsd.tcpcachetimeo: 300 vfs.nfsd.udphighwater: 500 vfs.nfsd.tcphighwater: 150000 vfs.nfsd.minthreads: 16 vfs.nfsd.maxthreads: 64 vfs.nfsd.threads: 36 vfs.nfsd.request_space_used: 71688 vfs.nfsd.request_space_used_highest: 47536420 vfs.nfsd.request_space_high: 47185920 vfs.nfsd.request_space_low: 31457280 vfs.nfsd.request_space_throttled: 0 vfs.nfsd.request_space_throttle_count: 8455 vfs.nfsd.fha.enable: 1 vfs.nfsd.fha.bin_shift: 22 vfs.nfsd.fha.max_nfsds_per_fh: 8 vfs.nfsd.fha.max_reqs_per_nfsd: 0 vfs.nfsd.fha.fhe_stats: fhe 0xfffffe10807ad540: { fh: 32896773722738482 num_rw: 9 num_exclusive: 0 num_threads: 1 thread 0xfffffe0035dc6380 offset 131072 (count 9) }, fhe 0xfffffe14197de7c0: { fh: 32757316134636194 num_rw: 8 num_exclusive: 0 num_threads: 1 thread 0xfffffe02c3290800 offset 131072 (count 8) }, fhe 0xfffffe06f2280cc0: { fh: 32869182852828802 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe038b3a8200 offset 0 (count 2) }, fhe 0xfffffe0c90f5f400: { fh: 32493416164103072 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe0ea55e4d00 offset 0 (count 2) }, fhe 0xfffffe0ca9bd3d40: { fh: 32896984176135987 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe038b41ee00 offset 0 (count 2) }, fhe 0xfffffe07c47884c0: { fh: 32897044305678131 num_rw: 4 num_exclusive: 0 num_threads: 1 thread 0xfffffe03aff63300 offset 131072 (count 4) }, fhe 0xfffffe0aa9b151c0: { fh: 32892809467924243 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe0f928e3780 offset 0 (count 2) }, fhe 0xfffffe0762c91300: { fh: 32609079633383714 num_rw: 1 num_exclusive: 0 
num_threads: 1 thread 0xfffffe0a44496700 offset 0 (count 1) }, fhe 0xfffffe11b0bf43c0: { fh: 32869363241455234 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe0d550b4900 offset 0 (count 2) }, fhe 0xfffffe1771ebd740: { fh: 32753381944593018 num_rw: 6 num_exclusive: 0 num_threads: 1 thread 0xfffffe1342368700 offset 131072 (count 6) }, fhe 0xfffffe0ba23a52c0: { fh: 32679023175800193 num_rw: 1 num_exclusive: 0 num_threads: 1 thread 0xfffffe07460a8280 offset 0 (count 1) }, fhe 0xfffffe092bd460c0: { fh: 32770347065412426 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe0446182400 offset 0 (count 2) }, fhe 0xfffffe07d65df600: { fh: 32416961451261960 num_rw: 1 num_exclusive: 0 num_threads: 1 thread 0xfffffe1596ead400 offset 0 (count 1) }, fhe 0xfffffe036487ab40: { fh: 32746333903260256 num_rw: 1 num_exclusive: 0 num_threads: 1 thread 0xfffffe0955989380 offset 0 (count 1) }, fhe 0xfffffe12db02e640: { fh: 32803607292153112 num_rw: 0 num_exclusive: 1 num_threads: 1 thread 0xfffffe0a88c8b780 offset 0 (count 1) }, fhe 0xfffffe0c50823a00: { fh: 32696404908442640 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe04e4a7fc00 offset 1305526272 (count 2) }, fhe 0xfffffe1193fd7000: { fh: 32623167126115560 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe0551cb4280 offset 0 (count 2) }, fhe 0xfffffe0eeacd33c0: { fh: 32809096260357425 num_rw: 2 num_exclusive: 0 num_threads: 1 thread 0xfffffe1516ecdc80 offset 0 (count 2) } Unfortunately, I did not think to take a procstat -kk of the nfsd threads. -GAWollman From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 04:32:23 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BA7854AB; Thu, 17 Apr 2014 04:32:23 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 906151123; Thu, 17 Apr 2014 04:32:23 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3H4WNRl063992; Thu, 17 Apr 2014 04:32:23 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3H4WN42063991; Thu, 17 Apr 2014 04:32:23 GMT (envelope-from linimon) Date: Thu, 17 Apr 2014 04:32:23 GMT Message-Id: <201404170432.s3H4WN42063991@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/172630: [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 04:32:23 -0000 Old Synopsis: LOR zfs/zfs_vfsops.c kern/kern_descrip.c New Synopsis: [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Apr 17 04:32:02 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
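On the NFS-versus-quota hang reported above, the request_space counters are the tell-tale. A sketch of what to sample while the server is wedged (the interpretation follows Wollman's numbers):

# used pinned near request_space_high with throttled=1 means the
# server has stopped accepting new requests until usage drains:
sysctl vfs.nfsd.request_space_used vfs.nfsd.request_space_high vfs.nfsd.request_space_throttled
# Kernel stacks of the nfsd threads at that moment would show where
# they block (the procstat Wollman wished he had taken):
procstat -kk $(pgrep nfsd)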
http://www.freebsd.org/cgi/query-pr.cgi?pr=172630 From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 04:33:50 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 65108573; Thu, 17 Apr 2014 04:33:50 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 36C4B1137; Thu, 17 Apr 2014 04:33:50 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3H4Xo4B064182; Thu, 17 Apr 2014 04:33:50 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3H4Xniw064181; Thu, 17 Apr 2014 04:33:49 GMT (envelope-from linimon) Date: Thu, 17 Apr 2014 04:33:49 GMT Message-Id: <201404170433.s3H4Xniw064181@freefall.freebsd.org> To: takeda@takeda.tk, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/172197: [zfs] Userquota (as well as groupquota) does not work on ZFS (possible regression) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 04:33:50 -0000 Old Synopsis: Userquota (as well as groupquota) does not work on ZFS (possible regression) New Synopsis: [zfs] Userquota (as well as groupquota) does not work on ZFS (possible regression) State-Changed-From-To: open->feedback State-Changed-By: linimon State-Changed-When: Thu Apr 17 04:32:42 UTC 2014 State-Changed-Why: reclassify. Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Apr 17 04:32:42 UTC 2014 Responsible-Changed-Why: to submitter: ZFS has been updated several times since this PR was filed. Is it still relevant? 
http://www.freebsd.org/cgi/query-pr.cgi?pr=172197 From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 04:35:27 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CFCF963A; Thu, 17 Apr 2014 04:35:27 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A4C24114A; Thu, 17 Apr 2014 04:35:27 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3H4ZR4j064427; Thu, 17 Apr 2014 04:35:27 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3H4ZRFJ064426; Thu, 17 Apr 2014 04:35:27 GMT (envelope-from linimon) Date: Thu, 17 Apr 2014 04:35:27 GMT Message-Id: <201404170435.s3H4ZRFJ064426@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/170523: [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNTS dataset X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 04:35:27 -0000 Old Synopsis: zfs rename pool@snapshot1 pool@snapshot2 UNMOUNTS dataset New Synopsis: [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNTS dataset Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Apr 17 04:35:12 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=170523 From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 09:12:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C92DDA0 for ; Thu, 17 Apr 2014 09:12:34 +0000 (UTC) Received: from mail.bytecamp.net (mail.bytecamp.net [212.204.60.9]) by mx1.freebsd.org (Postfix) with ESMTP id 182B510E4 for ; Thu, 17 Apr 2014 09:12:33 +0000 (UTC) Received: (qmail 17495 invoked by uid 89); 17 Apr 2014 11:12:32 +0200 Received: from stella.bytecamp.net (HELO ?212.204.60.37?) 
(rs%bytecamp.net@212.204.60.37) by mail.bytecamp.net with CAMELLIA256-SHA encrypted SMTP; 17 Apr 2014 11:12:32 +0200 Message-ID: <534F9AFF.6010809@bytecamp.net> Date: Thu, 17 Apr 2014 11:12:31 +0200 From: Robert Schulze Organization: bytecamp GmbH User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130330 Thunderbird/17.0.5 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: NFS behavior on a ZFS dataset with no quota remaining References: <21327.21004.879860.960260@hergotha.csail.mit.edu> In-Reply-To: <21327.21004.879860.960260@hergotha.csail.mit.edu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 09:12:34 -0000

Hi,

On 17.04.2014 06:01, Garrett Wollman wrote:
> Recently one of our users managed to constipate one of our NFS servers
> in an odd way. They hit the quota on their dataset, and rather than
> having all of their writes error out as they should have, the NFS
> server instead stopped responding to all requests.

This behaviour has been present since the beginnings of ZFS in FreeBSD. When a quota limit is reached, local reads perform very badly or stall completely. The issue is even more annoying in combination with NFS; I've seen it happen quite often. IMHO this is a flaw which can easily be used to DoS any NFS server with quotas set.

with kind regards,
Robert Schulze

--
/7\ bytecamp GmbH
Geschwister-Scholl-Str. 10, 14776 Brandenburg a.d. Havel
HRB15752, Amtsgericht Potsdam, Geschaeftsfuehrer: Bjoern Barnekow, Frank Rosenbaum, Sirko Zidlewitz
tel +49 3381 79637-0 (weekdays 10-12 and 13-17), fax +49 3381 79637-20
mail rs@bytecamp.net, web http://bytecamp.net/

From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 09:56:02 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 013F5883; Thu, 17 Apr 2014 09:56:02 +0000 (UTC) Received: from home.opsec.eu (home.opsec.eu [IPv6:2001:14f8:200::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 84B9A14E7; Thu, 17 Apr 2014 09:56:01 +0000 (UTC) Received: from home.opsec.eu (localhost [127.0.0.1]) by home.opsec.eu (8.14.7/8.14.7) with ESMTP id s3H9tiCn063188; Thu, 17 Apr 2014 11:55:59 +0200 (CEST) (envelope-from lists@opsec.eu) Received: (from pi@localhost) by home.opsec.eu (8.14.7/8.14.7/Submit) id s3E4b979014506; Mon, 14 Apr 2014 06:37:09 +0200 (CEST) (envelope-from lists@opsec.eu) X-Authentication-Warning: home.opsec.eu: pi set sender to lists@opsec.eu using -f Date: Mon, 14 Apr 2014 06:37:08 +0200 From: Kurt Jaeger To: Dmitry Sivachenko Subject: Re: UFS2 SU+J could not recover after power-off again (was: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN!)
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 09:56:02 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 013F5883; Thu, 17 Apr 2014 09:56:02 +0000 (UTC) Received: from home.opsec.eu (home.opsec.eu [IPv6:2001:14f8:200::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 84B9A14E7; Thu, 17 Apr 2014 09:56:01 +0000 (UTC) Received: from home.opsec.eu (localhost [127.0.0.1]) by home.opsec.eu (8.14.7/8.14.7) with ESMTP id s3H9tiCn063188; Thu, 17 Apr 2014 11:55:59 +0200 (CEST) (envelope-from lists@opsec.eu) Received: (from pi@localhost) by home.opsec.eu (8.14.7/8.14.7/Submit) id s3E4b979014506; Mon, 14 Apr 2014 06:37:09 +0200 (CEST) (envelope-from lists@opsec.eu) X-Authentication-Warning: home.opsec.eu: pi set sender to lists@opsec.eu using -f Date: Mon, 14 Apr 2014 06:37:08 +0200 From: Kurt Jaeger To: Dmitry Sivachenko Subject: Re: UFS2 SU+J could not recover after power-off again (was: One process which would not die force me to power-cycle server and ALL UFS SUJ FSes are completely broken after that AGAIN!) Message-ID: <20140414043708.GA2341@home.opsec.eu> References: <981154629.20140412170953@serebryakov.spb.ru> <482103242.20140413141010@serebryakov.spb.ru> <8A3243EF-90D0-428A-99C1-8360DB402B86@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <8A3243EF-90D0-428A-99C1-8360DB402B86@gmail.com> Cc: freebsd-fs@FreeBSD.org, freebsd-stable@freebsd.org, lev@FreeBSD.org, pfg@FreeBSD.org, mckusick@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 09:56:02 -0000 Hi! > Turn off journaling, it has many issues reported. I agree with that. We do this on every new install. -- pi@opsec.eu +49 171 3101372 6 years to go ! From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 12:30:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4FF7FE0C for ; Thu, 17 Apr 2014 12:30:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3D6A41575 for ; Thu, 17 Apr 2014 12:30:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3HCU2X4045873 for ; Thu, 17 Apr 2014 12:30:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3HCU2UL045872; Thu, 17 Apr 2014 12:30:02 GMT (envelope-from gnats) Date: Thu, 17 Apr 2014 12:30:02 GMT Message-Id: <201404171230.s3HCU2UL045872@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/186652: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 12:30:02 -0000 The following reply was made to PR kern/186652; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/186652: commit references a PR Date: Thu, 17 Apr 2014 12:22:14 +0000 (UTC) Author: ae Date: Thu Apr 17 12:22:08 2014 New Revision: 264600 URL: http://svnweb.freebsd.org/changeset/base/264600 Log: Remove redundant unlock. This code was removed from the opensolaris and darwin's netsmb implementations, in DfBSD it also has been disabled.
PR: 36566, 87859, 139407, 161579, 175557, 178412, 186652 MFC after: 2 weeks Sponsored by: Yandex LLC Modified: head/sys/netsmb/smb_iod.c Modified: head/sys/netsmb/smb_iod.c ============================================================================== --- head/sys/netsmb/smb_iod.c Thu Apr 17 12:16:51 2014 (r264599) +++ head/sys/netsmb/smb_iod.c Thu Apr 17 12:22:08 2014 (r264600) @@ -87,8 +87,6 @@ smb_iod_invrq(struct smbiod *iod) */ SMB_IOD_RQLOCK(iod); TAILQ_FOREACH(rqp, &iod->iod_rqlist, sr_link) { - if (rqp->sr_flags & SMBR_INTERNAL) - SMBRQ_SUNLOCK(rqp); rqp->sr_flags |= SMBR_RESTART; smb_iod_rqprocessed(rqp, ENOTCONN); } _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 12:30:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 41AA2E0B for ; Thu, 17 Apr 2014 12:30:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2F5EF1574 for ; Thu, 17 Apr 2014 12:30:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3HCU1t2045862 for ; Thu, 17 Apr 2014 12:30:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3HCU1hg045861; Thu, 17 Apr 2014 12:30:01 GMT (envelope-from gnats) Date: Thu, 17 Apr 2014 12:30:01 GMT Message-Id: <201404171230.s3HCU1hg045861@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/87859: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 12:30:01 -0000 The following reply was made to PR kern/87859; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/87859: commit references a PR Date: Thu, 17 Apr 2014 12:22:11 +0000 (UTC) Author: ae Date: Thu Apr 17 12:22:08 2014 New Revision: 264600 URL: http://svnweb.freebsd.org/changeset/base/264600 Log: Remove redundant unlock. This code was removed from the opensolaris and darwin's netsmb implementations, in DfBSD it also has been disabled. 
PR: 36566, 87859, 139407, 161579, 175557, 178412, 186652 MFC after: 2 weeks Sponsored by: Yandex LLC Modified: head/sys/netsmb/smb_iod.c Modified: head/sys/netsmb/smb_iod.c ============================================================================== --- head/sys/netsmb/smb_iod.c Thu Apr 17 12:16:51 2014 (r264599) +++ head/sys/netsmb/smb_iod.c Thu Apr 17 12:22:08 2014 (r264600) @@ -87,8 +87,6 @@ smb_iod_invrq(struct smbiod *iod) */ SMB_IOD_RQLOCK(iod); TAILQ_FOREACH(rqp, &iod->iod_rqlist, sr_link) { - if (rqp->sr_flags & SMBR_INTERNAL) - SMBRQ_SUNLOCK(rqp); rqp->sr_flags |= SMBR_RESTART; smb_iod_rqprocessed(rqp, ENOTCONN); } _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 12:30:03 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 402D0E0D for ; Thu, 17 Apr 2014 12:30:03 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2E2841578 for ; Thu, 17 Apr 2014 12:30:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3HCU3HZ045879 for ; Thu, 17 Apr 2014 12:30:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3HCU3GN045878; Thu, 17 Apr 2014 12:30:03 GMT (envelope-from gnats) Date: Thu, 17 Apr 2014 12:30:03 GMT Message-Id: <201404171230.s3HCU3GN045878@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/36566: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 12:30:03 -0000 The following reply was made to PR kern/36566; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/36566: commit references a PR Date: Thu, 17 Apr 2014 12:22:11 +0000 (UTC) Author: ae Date: Thu Apr 17 12:22:08 2014 New Revision: 264600 URL: http://svnweb.freebsd.org/changeset/base/264600 Log: Remove redundant unlock. This code was removed from the opensolaris and darwin's netsmb implementations, in DfBSD it also has been disabled. 
PR: 36566, 87859, 139407, 161579, 175557, 178412, 186652 MFC after: 2 weeks Sponsored by: Yandex LLC Modified: head/sys/netsmb/smb_iod.c Modified: head/sys/netsmb/smb_iod.c ============================================================================== --- head/sys/netsmb/smb_iod.c Thu Apr 17 12:16:51 2014 (r264599) +++ head/sys/netsmb/smb_iod.c Thu Apr 17 12:22:08 2014 (r264600) @@ -87,8 +87,6 @@ smb_iod_invrq(struct smbiod *iod) */ SMB_IOD_RQLOCK(iod); TAILQ_FOREACH(rqp, &iod->iod_rqlist, sr_link) { - if (rqp->sr_flags & SMBR_INTERNAL) - SMBRQ_SUNLOCK(rqp); rqp->sr_flags |= SMBR_RESTART; smb_iod_rqprocessed(rqp, ENOTCONN); } _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 12:30:04 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5DF12E11 for ; Thu, 17 Apr 2014 12:30:04 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4BE941579 for ; Thu, 17 Apr 2014 12:30:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3HCU4nU045885 for ; Thu, 17 Apr 2014 12:30:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3HCU4YK045884; Thu, 17 Apr 2014 12:30:04 GMT (envelope-from gnats) Date: Thu, 17 Apr 2014 12:30:04 GMT Message-Id: <201404171230.s3HCU4YK045884@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/178412: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 12:30:04 -0000 The following reply was made to PR kern/178412; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/178412: commit references a PR Date: Thu, 17 Apr 2014 12:22:13 +0000 (UTC) Author: ae Date: Thu Apr 17 12:22:08 2014 New Revision: 264600 URL: http://svnweb.freebsd.org/changeset/base/264600 Log: Remove redundant unlock. This code was removed from the opensolaris and darwin's netsmb implementations, in DfBSD it also has been disabled. 
PR: 36566, 87859, 139407, 161579, 175557, 178412, 186652 MFC after: 2 weeks Sponsored by: Yandex LLC Modified: head/sys/netsmb/smb_iod.c Modified: head/sys/netsmb/smb_iod.c ============================================================================== --- head/sys/netsmb/smb_iod.c Thu Apr 17 12:16:51 2014 (r264599) +++ head/sys/netsmb/smb_iod.c Thu Apr 17 12:22:08 2014 (r264600) @@ -87,8 +87,6 @@ smb_iod_invrq(struct smbiod *iod) */ SMB_IOD_RQLOCK(iod); TAILQ_FOREACH(rqp, &iod->iod_rqlist, sr_link) { - if (rqp->sr_flags & SMBR_INTERNAL) - SMBRQ_SUNLOCK(rqp); rqp->sr_flags |= SMBR_RESTART; smb_iod_rqprocessed(rqp, ENOTCONN); } _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 12:30:06 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B7475EE1 for ; Thu, 17 Apr 2014 12:30:06 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A5534157C for ; Thu, 17 Apr 2014 12:30:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3HCU6lM045901 for ; Thu, 17 Apr 2014 12:30:06 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3HCU6oQ045900; Thu, 17 Apr 2014 12:30:06 GMT (envelope-from gnats) Date: Thu, 17 Apr 2014 12:30:06 GMT Message-Id: <201404171230.s3HCU6oQ045900@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/139407: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 12:30:06 -0000 The following reply was made to PR kern/139407; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/139407: commit references a PR Date: Thu, 17 Apr 2014 12:22:12 +0000 (UTC) Author: ae Date: Thu Apr 17 12:22:08 2014 New Revision: 264600 URL: http://svnweb.freebsd.org/changeset/base/264600 Log: Remove redundant unlock. This code was removed from the opensolaris and darwin's netsmb implementations, in DfBSD it also has been disabled. 
PR: 36566, 87859, 139407, 161579, 175557, 178412, 186652 MFC after: 2 weeks Sponsored by: Yandex LLC Modified: head/sys/netsmb/smb_iod.c Modified: head/sys/netsmb/smb_iod.c ============================================================================== --- head/sys/netsmb/smb_iod.c Thu Apr 17 12:16:51 2014 (r264599) +++ head/sys/netsmb/smb_iod.c Thu Apr 17 12:22:08 2014 (r264600) @@ -87,8 +87,6 @@ smb_iod_invrq(struct smbiod *iod) */ SMB_IOD_RQLOCK(iod); TAILQ_FOREACH(rqp, &iod->iod_rqlist, sr_link) { - if (rqp->sr_flags & SMBR_INTERNAL) - SMBRQ_SUNLOCK(rqp); rqp->sr_flags |= SMBR_RESTART; smb_iod_rqprocessed(rqp, ENOTCONN); } _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 17 12:30:05 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A8790E7A for ; Thu, 17 Apr 2014 12:30:05 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 96952157B for ; Thu, 17 Apr 2014 12:30:05 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3HCU58t045891 for ; Thu, 17 Apr 2014 12:30:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3HCU5oB045890; Thu, 17 Apr 2014 12:30:05 GMT (envelope-from gnats) Date: Thu, 17 Apr 2014 12:30:05 GMT Message-Id: <201404171230.s3HCU5oB045890@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/161579: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 17 Apr 2014 12:30:05 -0000 The following reply was made to PR kern/161579; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/161579: commit references a PR Date: Thu, 17 Apr 2014 12:22:12 +0000 (UTC) Author: ae Date: Thu Apr 17 12:22:08 2014 New Revision: 264600 URL: http://svnweb.freebsd.org/changeset/base/264600 Log: Remove redundant unlock. This code was removed from the opensolaris and darwin's netsmb implementations, in DfBSD it also has been disabled. 
PR: 36566, 87859, 139407, 161579, 175557, 178412, 186652 MFC after: 2 weeks Sponsored by: Yandex LLC Modified: head/sys/netsmb/smb_iod.c Modified: head/sys/netsmb/smb_iod.c ============================================================================== --- head/sys/netsmb/smb_iod.c Thu Apr 17 12:16:51 2014 (r264599) +++ head/sys/netsmb/smb_iod.c Thu Apr 17 12:22:08 2014 (r264600) @@ -87,8 +87,6 @@ smb_iod_invrq(struct smbiod *iod) */ SMB_IOD_RQLOCK(iod); TAILQ_FOREACH(rqp, &iod->iod_rqlist, sr_link) { - if (rqp->sr_flags & SMBR_INTERNAL) - SMBRQ_SUNLOCK(rqp); rqp->sr_flags |= SMBR_RESTART; smb_iod_rqprocessed(rqp, ENOTCONN); } _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From
owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 19:35:05 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0312A9E3 for ; Fri, 18 Apr 2014 19:35:05 +0000 (UTC) Received: from mail.ultra-secure.de (mail.ultra-secure.de [88.198.178.88]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 456061676 for ; Fri, 18 Apr 2014 19:35:03 +0000 (UTC) Received: (qmail 79251 invoked by uid 89); 18 Apr 2014 19:30:19 -0000 Received: by simscan 1.4.0 ppid: 79246, pid: 79248, t: 0.1657s scanners: attach: 1.4.0 clamav: 0.97.3/m:55/d:18821 Received: from unknown (HELO ?212.71.117.95?) (rainer@ultra-secure.de@212.71.117.95) by mail.ultra-secure.de with ESMTPA; 18 Apr 2014 19:30:18 -0000 From: Rainer Duffner Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable Subject: What happened with the GlusterFS port? Message-Id: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> Date: Fri, 18 Apr 2014 21:30:17 +0200 To: FreeBSD FS Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) X-Mailer: Apple Mail (2.1874) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 19:35:05 -0000 Hi, does anybody know where the effort to port GlusterFS to FreeBSD went? There's this (very) outdated wiki-page: https://wiki.freebsd.org/GlusterFS and there's the SoC project: https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport But nothing seems to have happened after it was finished. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 19:43:42 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7C966381 for ; Fri, 18 Apr 2014 19:43:42 +0000 (UTC) Received: from mail.ignoranthack.me (ujvl.x.rootbsd.net [199.102.79.106]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 586DD1768 for ; Fri, 18 Apr 2014 19:43:41 +0000 (UTC) Received: from [10.73.160.242] (nat-dip7.cfw-a-gci.corp.yahoo.com [209.131.62.116]) (using SSLv3 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: sbruno@ignoranthack.me) by mail.ignoranthack.me (Postfix) with ESMTPSA id 308471928E0; Fri, 18 Apr 2014 19:43:41 +0000 (UTC) Subject: Re: What happened with the GlusterFS port?
From: Sean Bruno To: Rainer Duffner In-Reply-To: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature"; boundary="=-OiC5H3oUnA79fPcWbxwx" Date: Fri, 18 Apr 2014 12:43:40 -0700 Message-ID: <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: sbruno@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 19:43:42 -0000 --=-OiC5H3oUnA79fPcWbxwx Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > Hi, >=20 > does anybody know where the effort to port GluserFS to FreeBSD went? >=20 > There=E2=80=99s this (very) outdated wiki-page: > https://wiki.freebsd.org/GlusterFS >=20 > and there=E2=80=99s the SoC project: > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport >=20 > But nothing seems to have happened after it was finished. The port made decent progress over the GSOC period, but no, it never gained any traction to end up in the ports collection. sean --=-OiC5H3oUnA79fPcWbxwx Content-Type: application/pgp-signature; name="signature.asc" Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAABAgAGBQJTUYBsAAoJEBkJRdwI6BaHCUAH/i8qvftK0J1r7u6pXUhTiJKm PyKNMCQfIj+rrdlxxVVHvHX3PpVucHOmAdKXljFLbxjp2tYGJJFkZMjZDPCuwrTu Or3qODXPqOQ6Gmb5m5sGneVOJA1Lqc4uUTAk7ZfG8GlhowbyxeBAMPyaVNupTW4g 32wOj9zDTipK7y5T8c0ipn5aEPh459xlBjQStS5H3JVu4UlnMfZuoHpbPTxVeAyW zcSn33UG1h1CtYkS8un7Dtlq81MPCz8lVfCXsb7z+F7TzVS/vRx28wylrIqvLcfm O5Mz8ewppXIx++z6eQ8PbLIe5EX6TJ29OntTIVh4+9FrhJ/WGH06kgEPv6l7yu8= =nmp+ -----END PGP SIGNATURE----- --=-OiC5H3oUnA79fPcWbxwx-- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 19:51:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C4D5656B; Fri, 18 Apr 2014 19:51:08 +0000 (UTC) Received: from mail-ob0-x22e.google.com (mail-ob0-x22e.google.com [IPv6:2607:f8b0:4003:c01::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 820A41841; Fri, 18 Apr 2014 19:51:08 +0000 (UTC) Received: by mail-ob0-f174.google.com with SMTP id gq1so2143207obb.5 for ; Fri, 18 Apr 2014 12:51:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=PcTVvZS8AXvBSh0UcKwapbFFrNHng57QsswKXB5FdY4=; b=ZeJ3XDM7BI8/ssWs3lgQyAyZTSIx7RBX8cV2cykGZZh95uo+jFTtlZmVWthumbJ6XX eBV3nBE+UmpSlwVRrh3OOgGRbgFqhm9r0wSI/e3ovAUfs8cfyZMe20ipXdLBA/c2Qdk/ 8SWqgUbUU0bLxJnIuW1CBABT4QjE2nL29rw1BRdbJsGTGHi32jns3VgnsVHWykhwPC74 1gI7QAZhKN6y+3XNNXzW3CcxUsI/x+DN1cI6SdSOLb0NcWFDEpYoU/+cPj/ovj85RG0V KPoPWXsVWGwlRDxPZeSyxmqbFpfqiMBLA2WEef6eZTFoi6j9dygrwtGcXaSYE7ZPEhsO Bbvg== MIME-Version: 1.0 X-Received: by 10.60.133.17 with SMTP id oy17mr3414218oeb.51.1397850667785; Fri, 18 Apr 2014 12:51:07 -0700 (PDT) Received: by 10.76.170.4 with 
HTTP; Fri, 18 Apr 2014 12:51:07 -0700 (PDT) In-Reply-To: <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> Date: Fri, 18 Apr 2014 15:51:07 -0400 Message-ID: Subject: Re: What happened with the GlusterFS port? From: Outback Dingo To: sbruno@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 19:51:08 -0000 On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno wrote: > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > > Hi, > > > > does anybody know where the effort to port GluserFS to FreeBSD went? > > > > There=E2=80=99s this (very) outdated wiki-page: > > https://wiki.freebsd.org/GlusterFS > > > > and there=E2=80=99s the SoC project: > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport > > > > But nothing seems to have happened after it was finished. > > > The port made decent progress over the GSOC period, but no, it never > gained any traction to end up in the ports collection. > > not to hijack the thread but, glusters ancient, use riak-cs, or swift, or port leofs > sean > > > > From owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 20:05:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CB3DF821; Fri, 18 Apr 2014 20:05:52 +0000 (UTC) Received: from mail-wg0-x22a.google.com (mail-wg0-x22a.google.com [IPv6:2a00:1450:400c:c00::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3F6D21972; Fri, 18 Apr 2014 20:05:52 +0000 (UTC) Received: by mail-wg0-f42.google.com with SMTP id y10so817668wgg.1 for ; Fri, 18 Apr 2014 13:05:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=DN/bzmRLlipUlEzulmyVvX0gjU8xEyY9GY94cTxUij0=; b=z4GUdfNflIFY7RbkDNh0fM+Du+tcjgHAwGh2PsS2P7lokRC0zaVibxSLKQfnJhXtQo ulOFawVM1B1ulbENsInRjE9PwzOqw5y8bTULTl98fQhysRJ/18sk6C7Uomkm3G02WNZB sMQso/1C0UFqB/M3tjmvjeeSsI2fzMTtKH1aetFMdWT80d3IamtzRTUct9dI+GobYSl/ s+r8bPjZ9NRQ5nvbMJNflP7OFPfnBNil6Ge9OwhXWpn1bj/0JiSgkoKQ8d5M+U9U++MR z0uqj7a2eN7iLBXl+AAN/L5nNsQ2jisZ6vpgUbbwoFUV+UOCETCfY2vly5KTOI4845J9 BpxA== MIME-Version: 1.0 X-Received: by 10.194.81.98 with SMTP id z2mr17888552wjx.12.1397851550491; Fri, 18 Apr 2014 13:05:50 -0700 (PDT) Received: by 10.194.28.196 with HTTP; Fri, 18 Apr 2014 13:05:50 -0700 (PDT) In-Reply-To: References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> Date: Fri, 18 Apr 2014 15:05:50 -0500 Message-ID: Subject: Re: What happened with the GlusterFS port? 
From: Marty Rosenberg To: Outback Dingo Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 20:05:52 -0000 If you just want to use freebsd as a brick (as I do), I have patches against 3.3 that enable it to build, and it hasn't lost any data on me yet, but every once in a while, things get funky. I've been meaning to upstream the patches *forever*, but I simply haven't gotten around to it. On Fri, Apr 18, 2014 at 2:51 PM, Outback Dingo wrot= e: > On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno > wrote: > > > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > > > Hi, > > > > > > does anybody know where the effort to port GluserFS to FreeBSD went? > > > > > > There=E2=80=99s this (very) outdated wiki-page: > > > https://wiki.freebsd.org/GlusterFS > > > > > > and there=E2=80=99s the SoC project: > > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport > > > > > > But nothing seems to have happened after it was finished. > > > > > > The port made decent progress over the GSOC period, but no, it never > > gained any traction to end up in the ports collection. > > > > > not to hijack the thread but, glusters ancient, use riak-cs, or swift, or > port leofs > > > > sean > > > > > > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 21:06:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B8322A08 for ; Fri, 18 Apr 2014 21:06:00 +0000 (UTC) Received: from mail.ignoranthack.me (ujvl.x.rootbsd.net [199.102.79.106]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 82A7B1FE6 for ; Fri, 18 Apr 2014 21:06:00 +0000 (UTC) Received: from [10.73.160.242] (nat-dip7.cfw-a-gci.corp.yahoo.com [209.131.62.116]) (using SSLv3 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: sbruno@ignoranthack.me) by mail.ignoranthack.me (Postfix) with ESMTPSA id 292151928E0; Fri, 18 Apr 2014 21:05:57 +0000 (UTC) Subject: Re: What happened with the GlusterFS port? 
From: Sean Bruno To: Marty Rosenberg In-Reply-To: References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature"; boundary="=-xtX5v5mWycLZW+DfJ0tb" Date: Fri, 18 Apr 2014 14:05:54 -0700 Message-ID: <1397855154.58880.25.camel@powernoodle.corp.yahoo.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: sbruno@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 21:06:00 -0000 --=-xtX5v5mWycLZW+DfJ0tb Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable On Fri, 2014-04-18 at 15:05 -0500, Marty Rosenberg wrote: > If you just want to use freebsd as a brick (as I do), I have patches > against 3.3 that enable it to build, and it hasn't lost any data on me > yet, but every once in a while, things get funky. I've been meaning > to upstream the patches *forever*, but I simply haven't gotten around > to it. >=20 >=20 If you feel frisky, grab up what you have and start assembling a port. It shouldn't be too gross, but it would definitely get more eyeballs on the code if its available for easy installation. sean >=20 > On Fri, Apr 18, 2014 at 2:51 PM, Outback Dingo > wrote: > On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno > wrote: > =20 > > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > > > Hi, > > > > > > does anybody know where the effort to port GluserFS to > FreeBSD went? > > > > > > There=E2=80=99s this (very) outdated wiki-page: > > > https://wiki.freebsd.org/GlusterFS > > > > > > and there=E2=80=99s the SoC project: > > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport > > > > > > But nothing seems to have happened after it was finished. > > > > > > The port made decent progress over the GSOC period, but no, > it never > > gained any traction to end up in the ports collection. 
> > > > > =20 > not to hijack the thread but, glusters ancient, use riak-cs, > or swift, or > port leofs > =20 > =20 > > sean > > > > > > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org" >=20 >=20 --=-xtX5v5mWycLZW+DfJ0tb Content-Type: application/pgp-signature; name="signature.asc" Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAABAgAGBQJTUZOyAAoJEBkJRdwI6BaH0a8H/AltZMi7omHVrcIqManl/opZ JwgzRfHB6Eq1gtc1cDBsKxAR+yQu3tpLS42lw1d9Mw7n/9rz/UghLuBNvm4r42Fu DKYOlXCPhJM6JQNP7kWjahHLozuVWxyusHvQTu6hmVEzvKqISl++ODLCo/LKQ9ol nfiSXN19xvifv4LdjcdSQV/ED1GrAytgnL1JlEmCpvSfrN5WgRY/n8o88FsQKZ8P WhGHQE8Rhnd/2g9yLmAYrqHrrAz4PJerrAu4Jv7pA7T4Gb4z2RkIzxkVTMFJLTXY 1OoBvTjT9pL2XX70QFKcUr9CeT7OVRVtxzf19VtyIBW23rxG7rNewRCdFjaSAus= =Jerv -----END PGP SIGNATURE----- --=-xtX5v5mWycLZW+DfJ0tb-- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 21:06:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E2ABCA84 for ; Fri, 18 Apr 2014 21:06:29 +0000 (UTC) Received: from mail.ignoranthack.me (ujvl.x.rootbsd.net [199.102.79.106]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id BCD261FF5 for ; Fri, 18 Apr 2014 21:06:29 +0000 (UTC) Received: from [10.73.160.242] (nat-dip7.cfw-a-gci.corp.yahoo.com [209.131.62.116]) (using SSLv3 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: sbruno@ignoranthack.me) by mail.ignoranthack.me (Postfix) with ESMTPSA id 00E9B1928E0; Fri, 18 Apr 2014 21:06:28 +0000 (UTC) Subject: Re: What happened with the GlusterFS port? From: Sean Bruno To: Outback Dingo In-Reply-To: References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature"; boundary="=-+fUqUxaEtY5kqHVLWm8k" Date: Fri, 18 Apr 2014 14:06:28 -0700 Message-ID: <1397855188.58880.26.camel@powernoodle.corp.yahoo.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: sbruno@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 21:06:29 -0000 --=-+fUqUxaEtY5kqHVLWm8k Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable On Fri, 2014-04-18 at 15:51 -0400, Outback Dingo wrote: >=20 >=20 > On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno > wrote: > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > > Hi, > > > > does anybody know where the effort to port GluserFS to > FreeBSD went? > > > > There=E2=80=99s this (very) outdated wiki-page: > > https://wiki.freebsd.org/GlusterFS > > > > and there=E2=80=99s the SoC project: > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport > > > > But nothing seems to have happened after it was finished. 
> =20 > =20 > =20 > The port made decent progress over the GSOC period, but no, it > never > gained any traction to end up in the ports collection. > =20 >=20 >=20 > not to hijack the thread but, glusters ancient, use riak-cs, or swift, > or port leofs >=20 > =20 > sean > =20 > =20 > =20 >=20 >=20 I think swift and then ceph to be honest. At least that's what my universe looks like. sean --=-+fUqUxaEtY5kqHVLWm8k Content-Type: application/pgp-signature; name="signature.asc" Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAABAgAGBQJTUZPUAAoJEBkJRdwI6BaHJ2EIAIO/Z7FVd5sOq68y9l94s3j5 6uH2Nfj8erM4zpH9L3CjuXc9JAtb+ivZTB/Y2vkmIFuUQwRP9jZ8y/YeDHwiubI0 fh0aLo4zfB9RRWj9dHKXkUVBp5FLQx44mYHVCcKckG/G2J3q3OLSq1u6vjyll+4Q pavislC540jNq80Mzu0XMMCJto3IH/+XT0slYP0Rl+4XKVCg0TytAU9MPyd01PDM V0VnZmTIWE+10PY9+xe+xkPTTV2TGiwPxml2so7uZm2/NdN5Uu6TA627uJCqS/XZ ou+F/HgQpHQN1yH0qPPCAbPwpdzgsVP4NNfbcsYFeVNhXqTtzlH53g92/JKARHg= =byT4 -----END PGP SIGNATURE----- --=-+fUqUxaEtY5kqHVLWm8k-- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 18 21:16:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 90F2CD53; Fri, 18 Apr 2014 21:16:10 +0000 (UTC) Received: from mail-ob0-x22c.google.com (mail-ob0-x22c.google.com [IPv6:2607:f8b0:4003:c01::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4BFB810E5; Fri, 18 Apr 2014 21:16:10 +0000 (UTC) Received: by mail-ob0-f172.google.com with SMTP id wo20so2253502obc.31 for ; Fri, 18 Apr 2014 14:16:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=T7fWUS74LULsw+DEX7leSbZMeLENpHlooerOPCYQzuY=; b=CW4PtWLy3w1Q6YbSI51GadnthudDzEAmcdYoZF3haYOkcxXQEMDQy+NmirhPmr+2H6 JzcYaiD63pf+kCPA5RzIu7CQ/C5SVnwGhFYoNoHsBHjsGChoKJQFkAZXds732tf5X+f4 zGrLu9hAp19qHfZ7pdFT7k8TG+stCALA8fHDCwdR0X7dhFbYZK2urPXxRkeGH/cB7Upu XJfLWX6QUSa9RyK6+1sOnukz8ymR2Xt7hbA+BJtUI3LLb9oG6mZPCeAp82ykapa77lou KRQvac3lMzBQD4q/C06zTew0cpp7cjcumy/2e+YHvlXRJYH+/qJPbqs8sfKVyj70ReJU Ea7A== MIME-Version: 1.0 X-Received: by 10.60.161.101 with SMTP id xr5mr28270oeb.71.1397855769540; Fri, 18 Apr 2014 14:16:09 -0700 (PDT) Received: by 10.76.170.4 with HTTP; Fri, 18 Apr 2014 14:16:09 -0700 (PDT) In-Reply-To: <1397855188.58880.26.camel@powernoodle.corp.yahoo.com> References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <1397855188.58880.26.camel@powernoodle.corp.yahoo.com> Date: Fri, 18 Apr 2014 17:16:09 -0400 Message-ID: Subject: Re: What happened with the GlusterFS port? 
From: Outback Dingo To: sbruno@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 18 Apr 2014 21:16:10 -0000 On Fri, Apr 18, 2014 at 5:06 PM, Sean Bruno wrote: > On Fri, 2014-04-18 at 15:51 -0400, Outback Dingo wrote: > > > > > > On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno > > wrote: > > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > > > Hi, > > > > > > does anybody know where the effort to port GluserFS to > > FreeBSD went? > > > > > > There=E2=80=99s this (very) outdated wiki-page: > > > https://wiki.freebsd.org/GlusterFS > > > > > > and there=E2=80=99s the SoC project: > > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport > > > > > > But nothing seems to have happened after it was finished. > > > > > > > > The port made decent progress over the GSOC period, but no, it > > never > > gained any traction to end up in the ports collection. > > > > > > > > not to hijack the thread but, glusters ancient, use riak-cs, or swift, > > or port leofs > > > > > > sean > > > > > > > > > > > > I think swift and then ceph to be honest. At least that's what my > universe looks like. > does ceph run on freebsd yet ??? i knew there was a port in the works but like the glusterfs ports, it seems to have died also > > sean > > From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 02:22:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4926F95B; Sat, 19 Apr 2014 02:22:49 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 257011B41; Sat, 19 Apr 2014 02:22:48 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id 3C28C74CBB; Fri, 18 Apr 2014 19:22:48 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 78610-09; Fri, 18 Apr 2014 19:22:48 -0700 (PDT) Received: from [10.10.9.48] (unknown [124.195.193.123]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id 6797074CB4; Fri, 18 Apr 2014 19:22:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=ixsystems.com; s=newknight0; t=1397874167; bh=2MJEz+Gwd0lo9SpGsJaPyiu/zUuyKANNSCWYGFTosAs=; h=Subject:From:In-Reply-To:Date:Cc:References:To; b=u5oEAUieFXvIlcYoM6PogfaW5oFcgrGEGuH4JyQNQeYlUKEfaYnhsSSK3J+yTv75f LSCuTrchNB5nuQBooslhAaK2sZKQrRejrTuBSHn9oPVT0qb4jhIQEN9HYHYj4Vl2ea 4dQ7r7khbBdoaLBdfAzlL/3TFLwWBr7qCmJW4cuU= Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: What happened with the GlusterFS port? 
From: Jordan Hubbard In-Reply-To: Date: Sat, 19 Apr 2014 07:22:37 +0500 Content-Transfer-Encoding: quoted-printable Message-Id: References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <1397855188.58880.26.camel@powernoodle.corp.yahoo.com> To: Outback Dingo X-Mailer: Apple Mail (2.1874) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 19 Apr 2014 02:22:49 -0000 On Apr 19, 2014, at 2:16 AM, Outback Dingo wrote: > does ceph run on freebsd yet ??? i knew there was a port in the works but > like the glusterfs ports, it seems to have died also The situation with clustered / object filesystems on FreeBSD is rather dismal, to be honest. The same is true of most enterprise filestore / cloud integration code out there now. Amazon AWS compatible APIs? Linux-only. Swift/Openstack? Linux. CEPH? Linux. GlusterFS? Linux. I've looked at all of them, and found for the most part just vestiges of some FreeBSD porting effort that has since rotted (there are a few #ifdefs for FreeBSD in CEPH, but apparently it was only partially ported about 3 years ago and then not maintained anymore). - Jordan From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 08:44:05 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B3C568CE for ; Sat, 19 Apr 2014 08:44:05 +0000 (UTC) Received: from mail.ultra-secure.de (mail.ultra-secure.de [88.198.178.88]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F0B181C0B for ; Sat, 19 Apr 2014 08:44:04 +0000 (UTC) Received: (qmail 97006 invoked by uid 89); 19 Apr 2014 08:44:01 -0000 Received: from unknown (HELO ?192.168.1.207?) (rainer@ultra-secure.de@217.71.83.52) by mail.ultra-secure.de with ESMTPA; 19 Apr 2014 08:44:01 -0000 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: What happened with the GlusterFS port? From: Rainer Duffner In-Reply-To: Date: Sat, 19 Apr 2014 10:43:58 +0200 Message-Id: <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de> References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> To: Outback Dingo X-Mailer: Apple Mail (2.1874) Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 19 Apr 2014 08:44:05 -0000 On 18.04.2014 at 21:51, Outback Dingo wrote: > > > On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno wrote: > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote: > > Hi, > > > > does anybody know where the effort to port GlusterFS to FreeBSD went? > > > > There's this (very) outdated wiki-page: > > https://wiki.freebsd.org/GlusterFS > > > > and there's the SoC project: > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport > > > > But nothing seems to have happened after it was finished.
> > > The port made decent progress over the GSOC period, but no, it never > gained any traction to end up in the ports collection. > > That is very unfortunate. > not to hijack the thread but, glusters ancient, use riak-cs, or swift, or port leofs > > Currently, the unavailability of GlusterFS in FreeBSD (vs. the availability in Linux) is sort of a deal-breaker for some projects here. There are, regrettably, a large number of legacy applications that rely on a traditional filesystem. An equally large number of customers continue to rely on these same applications, for the foreseeable future (and they pay us to run the stuff). Traditionally, I would have just suggested a ZFS NFS fileserver - but it adds a single point of failure, manual failover with ZFS sends/HAST etc. GlusterFS would eliminate this (in situations where the customer needs a number of servers anyway). I guess it won't happen until somebody is paid to do it (SoC sort of proved that) - directly or indirectly. From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 08:59:09 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5A835C02 for ; Sat, 19 Apr 2014 08:59:09 +0000 (UTC) Received: from sasl.smtp.pobox.com (a-pb-sasl-quonix.pobox.com [208.72.237.25]) by mx1.freebsd.org (Postfix) with ESMTP id 0E9391CE0 for ; Sat, 19 Apr 2014 08:59:08 +0000 (UTC) Received: from sasl.smtp.pobox.com (unknown [127.0.0.1]) by a-pb-sasl-quonix.pobox.com (Postfix) with ESMTP id 4B21311D2C; Sat, 19 Apr 2014 04:59:06 -0400 (EDT) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=pobox.com; h=date :message-id:from:to:cc:subject:in-reply-to:references :mime-version:content-type:content-transfer-encoding; s=sasl; bh=Ucv3gB1Ipe0Ebc42ywDEO0DsXko=; b=kGcC/IV5wL7yqU/pVibhULuhKsEN afBpdmbA6+7Ix6FNbulOWR9W7YBTf20svBzIkki5SCUm8CNTuTKTqbFK8fQVTPBk LvXQICl6IiPNSYaJ5eayPTpHGjgH92F2HdSJ3AFpjcL11J5ZyCbLplcIwjmYgsxZ w79gycOXiU8lz7c= DomainKey-Signature: a=rsa-sha1; c=nofws; d=pobox.com; h=date:message-id :from:to:cc:subject:in-reply-to:references:mime-version :content-type:content-transfer-encoding; q=dns; s=sasl; b=Dyskbs S1NZojHNpjq48QMZIGsVX/o4LLUAohdIC2dONSo0cOR6bBC/TM+kcjBPpcVLESVl 1bkwF9kCInz8tyjkFg3rs1WBF+oGHpb5EP8qwk8eqyCAIktX2j8mpSIJDS5Y84US Gcbi8F6MUE99RdgqJG1+/uon4MpeWj1aza358= Received: from a-pb-sasl-quonix.pobox.com (unknown [127.0.0.1]) by a-pb-sasl-quonix.pobox.com (Postfix) with ESMTP id 4226511D2B; Sat, 19 Apr 2014 04:59:06 -0400 (EDT) Received: from bmach.nederware.nl (unknown [27.252.218.8]) by a-pb-sasl-quonix.pobox.com (Postfix) with ESMTPA id 48E0E11D2A; Sat, 19 Apr 2014 04:59:02 -0400 (EDT) Received: from quadrio.nederware.nl (quadrio.nederware.nl [192.168.33.13]) by bmach.nederware.nl (Postfix) with ESMTP id B3B0B2C9DF; Sat, 19 Apr 2014 20:59:00 +1200 (NZST) Received: from quadrio.nederware.nl (localhost [127.0.0.1]) by quadrio.nederware.nl (Postfix) with ESMTP id 95941836494D; Sat, 19 Apr 2014 20:59:00 +1200 (NZST) Date: Sat, 19 Apr 2014 20:59:00 +1200 Message-ID: <87zjjhofu3.wl%berend@pobox.com> From: Berend de Boer To: Rainer Duffner Subject: Re: What happened with the GlusterFS port?
In-Reply-To: <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de> References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de> User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue) FLIM/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL/10.8 EasyPG/1.0.0 Emacs/24.3 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO) Organization: Xplain Technology Ltd MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue") Content-Type: multipart/signed; boundary="pgp-sign-Multipart_Sat_Apr_19_20:59:00_2014-1"; micalg=pgp-sha256; protocol="application/pgp-signature" Content-Transfer-Encoding: 7bit X-Pobox-Relay-ID: DA5703FA-C7A0-11E3-BEC7-6F330E5B5709-48001098!a-pb-sasl-quonix.pobox.com Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 19 Apr 2014 08:59:09 -0000 --pgp-sign-Multipart_Sat_Apr_19_20:59:00_2014-1 Content-Type: text/plain; charset=US-ASCII >>>>> "Rainer" == Rainer Duffner writes: Rainer> GlusterFS would eliminate this (in situations where the Rainer> customer needs a number of servers anyway). If it works. People who try gluster usually are pretty disappointed. It's not a swap for your traditional NFS server. -- All the best, Berend de Boer --pgp-sign-Multipart_Sat_Apr_19_20:59:00_2014-1 Content-Type: application/pgp-signature Content-Transfer-Encoding: 7bit Content-Description: OpenPGP Digital Signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) iQIcBAABCAAGBQJTUjrUAAoJEKOfeD48G3g5euwP/0Tk2YfgmCyWbuQ5P9zrRwHD TCIbIrSHlR/zVT0aQREeuzEP8ZTOSlObmrDA8ES31QXNx/parzP0jk6Cd0oRfQsm FkynRtO/pIUy3/C9y294I+0eY5VNHW+euVfh6VD37gJDSqSQJFQMG4UsdCPXNu9r 7rrVqj2Soz3LYQFBjXiE/9jnG5Yf0NzkWRZxmeAm/FsKpew0oIWcm8Y4UOy1O2tE fTQC2MDTQgSs+sREVYB/f7zIBKIv0NQUvwcqwV7yMWfa3t9cdUgBuqnE8HUcY9NM yCfMyc2D5Fn7H/z4zVu3Ti8q7w7Qo7HEw6qHJoL08g7c6+MnPV++dRTgqpy67kFy EoOeBWc4Zxyij7n8gxIicolb8UQal5Chd3GlwOYswuBcS8U6/DR/6up33bybqgx3 sJ9TmmLJ9Rz/ZJa/95Pa0yVPITjFFU+f/fH+jOvAd6FJCW1jbVpnwePalVMTMJ7s WPHkhvLGfkCVu5SLY2GABCLJrXt2tJh+Md/QLuBc8UeKmDOFShlhieuzhMs7TCnM lfZgdfB8nf0cTQE2jznFN8aAv1yaG1aR/aFb4hVSQsVfrHC/laWzJOwGwTYltLYt T5ffkAW7JJw9ykTSE5zxvUn2pYeht2XeTYIjphsNxC3V+R3GaO2iSc3Uv9bz92YS ypbuY3T+GxrgeaC93hcI =T3u2 -----END PGP SIGNATURE----- --pgp-sign-Multipart_Sat_Apr_19_20:59:00_2014-1-- From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 09:11:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CED06E7D for ; Sat, 19 Apr 2014 09:11:45 +0000 (UTC) Received: from mail.ultra-secure.de (mail.ultra-secure.de [88.198.178.88]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1AA551E49 for ; Sat, 19 Apr 2014 09:11:44 +0000 (UTC) Received: (qmail 97533 invoked by uid 89); 19 Apr 2014 09:11:42 -0000 Received: from unknown (HELO ?192.168.1.207?) (rainer@ultra-secure.de@217.71.83.52) by mail.ultra-secure.de with ESMTPA; 19 Apr 2014 09:11:42 -0000 Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: What happened with the GlusterFS port?
From: Rainer Duffner
In-Reply-To: <87zjjhofu3.wl%berend@pobox.com>
Date: Sat, 19 Apr 2014 11:11:40 +0200
References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de> <87zjjhofu3.wl%berend@pobox.com>
To: Berend de Boer
Cc: FreeBSD FS

On 19.04.2014 at 10:59, Berend de Boer wrote:

>>>>>> "Rainer" == Rainer Duffner writes:
>
> Rainer> GlusterFS would eliminate this (in situations where the
> Rainer> customer needs a number of servers anyway).
>
> If it works. People who try Gluster are usually pretty disappointed.
>
> It's not a drop-in replacement for your traditional NFS server.
>

Well, I was suspecting something like this.

We've yet to go into production with the first GlusterFS customer - and then the ramp-up until it's running their whole stuff will be very long.
But I think their dataset is pretty small anyway. We don't deal with a lot of data, usually.

From what I have read, GlusterFS does not work very well with lots of small files (as they would occur in a website-hosting environment, where GlusterFS would look like a logical choice).
Is that still the case?

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 09:27:12 2014
From: Rainer Duffner
Subject: zfs sends "hang" recipient?
Message-Id: <16C5BA3C-2E32-4BD6-93AC-8640D8CEF060@ultra-secure.de>
Date: Sat, 19 Apr 2014 11:27:08 +0200
To: FreeBSD FS

Hi,

I've got a FreeBSD 9.1 server that does zfs sends (via zxfer from ports) to a FreeBSD 10.0 server. All amd64; the pool is some 6T, 50% full. The sender has 144GB RAM, the recipient has 192GB.

The snapshots are done with one of the tools in the ports (can't remember the name right now; I've basically tried them all).

During the send process (which takes 5 to 15 minutes), the receiving host blocks all commands involving filesystems.
Stuff like df or zpool list hangs until the receive has completed.

Is this a known problem?
Would it help to upgrade the 9.1 server to 10.0, too?
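[For context, zxfer is a wrapper around zfs send/receive, so a transfer like the one described above reduces to a send piped into a receive over ssh. A minimal sketch of that shape, with hypothetical pool, dataset, snapshot, and host names; the thread does not show the actual invocation:

    # All names below are hypothetical; this only illustrates the general
    # shape of the replication, not the exact zxfer command line.
    zfs snapshot tank/data@2014-04-19
    zfs send -i tank/data@2014-04-18 tank/data@2014-04-19 \
        | ssh recv-host zfs receive -F tank/data

It is while the zfs receive on the far end is running that commands like df and zpool list are reported to stall.]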
In this setup, I'm really conservative regarding upgrades, as I have very little downtime.

I've got a MySQL database on the receiving server (not on the same filesystem, of course, but on the same pool) that functions normally.

I could ignore this, but our statistics-gathering tool relies on stuff like the above and is completely b0rked by the hangs (which I could ignore again, if it wasn't for the customer who wanted these statistics, too...)

Rainer

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 17:12:36 2014
From: Gena Guchin
Date: Sat, 19 Apr 2014 10:12:26 -0700
Subject: ZFS unable to import pool
To: freebsd-fs@freebsd.org
Message-id: <3A9BFA07-7142-4F48-9151-E6BB5FFFD71D@icloud.com>

Hello FreeBSD users,

I have this huge problem with my ZFS server. I have accidentally formatted one of the drives in an exported ZFS pool, and now I can't import the pool back. This is an extremely important pool for me. The device that is missing is still attached to the system. Any help would be greatly appreciated.

#uname -a
FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

#zpool import
   pool: storage
     id: 11699153865862401654
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
 config:

        storage                 UNAVAIL  missing device
          raidz1-0              DEGRADED
            ada3                ONLINE
            ada4                ONLINE
            ada5                ONLINE
            ada6                ONLINE
            248348789931078390  UNAVAIL  cannot open
        cache
          ada1s2
        logs
          ada1s1                ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -

Thanks a lot!

-- Gena

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 17:14:46 2014
Date: Sat, 19 Apr 2014 13:14:35 -0400
From: Gary Palmer
To: Outback Dingo
Subject: Re: What happened with the GlusterFS port?
Message-ID: <20140419171435.GD15884@in-addr.com>
References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <1397855188.58880.26.camel@powernoodle.corp.yahoo.com>
Cc: FreeBSD FS

On Fri, Apr 18, 2014 at 05:16:09PM -0400, Outback Dingo wrote:
> On Fri, Apr 18, 2014 at 5:06 PM, Sean Bruno wrote:
>
> > On Fri, 2014-04-18 at 15:51 -0400, Outback Dingo wrote:
> > >
> > > On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno
> > > wrote:
> > > On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote:
> > > > Hi,
> > > >
> > > > does anybody know where the effort to port GlusterFS to FreeBSD went?
> > > >
> > > > There's this (very) outdated wiki-page:
> > > > https://wiki.freebsd.org/GlusterFS
> > > >
> > > > and there's the SoC project:
> > > > https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport
> > > >
> > > > But nothing seems to have happened after it was finished.
> > >
> > >
> > > The port made decent progress over the GSOC period, but no, it never
> > > gained any traction to end up in the ports collection.
> > >
> > >
> > > not to hijack the thread but, gluster's ancient, use riak-cs, or swift,
> > > or port leofs
> > >
> > >
> > > sean
> >
> > I think swift and then ceph to be honest. At least that's what my
> > universe looks like.
>
> does ceph run on freebsd yet ???
> i knew there was a port in the works but
> like the glusterfs ports, it seems to have died also

http://wiki.ceph.com/Planning/Blueprints/Emperor/Increasing_Ceph_portability

Note: from that page it says "Currently building on OSX 10.8 and FreeBSD 9.1".
Unfortunately, the github location linked to on that page doesn't exist.

Building ceph 0.72.2 on FreeBSD is non-trivial by the looks of it due to
too many non-portable Linuxisms.  Hope their porting effort moves forward.

Regards,

Gary

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 17:18:34 2014
Date: Sat, 19 Apr 2014 10:18:33 -0700
Subject: Re: ZFS unable to import pool
From: Freddie Cash
To: Gena Guchin
In-Reply-To: <3A9BFA07-7142-4F48-9151-E6BB5FFFD71D@icloud.com>
References: <3A9BFA07-7142-4F48-9151-E6BB5FFFD71D@icloud.com>
Cc: FreeBSD Filesystems

On Apr 19, 2014 10:12 AM, "Gena Guchin" wrote:
>
> Hello FreeBSD users,
>
> I have this huge problem with my ZFS server. I have accidentally
> formatted one of the drives in an exported ZFS pool, and now I can't
> import the pool back. This is an extremely important pool for me. The
> device that is missing is still attached to the system. Any help would
> be greatly appreciated.
>
>
>
>
> #uname -a
> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16
> 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>
> #zpool import
>    pool: storage
>      id: 11699153865862401654
>   state: UNAVAIL
>  status: One or more devices are missing from the system.
>  action: The pool cannot be imported. Attach the missing
>         devices and try again.
> see: http://illumos.org/msg/ZFS-8000-6X
> config:
>
>         storage                 UNAVAIL  missing device
>           raidz1-0              DEGRADED
>             ada3                ONLINE
>             ada4                ONLINE
>             ada5                ONLINE
>             ada6                ONLINE
>             248348789931078390  UNAVAIL  cannot open
>         cache
>           ada1s2
>         logs
>           ada1s1                ONLINE

If you do the following, can you then import it:

zpool offline storage 248348789931078390

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 17:21:56 2014
Subject: Re: What happened with the GlusterFS port?
From: Jordan Hubbard
In-Reply-To: <20140419171435.GD15884@in-addr.com>
Date: Sat, 19 Apr 2014 22:21:20 +0500
Message-Id: <253AE3BB-AA64-44B1-AA4D-31C8B6B99863@ixsystems.com>
References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <1397855188.58880.26.camel@powernoodle.corp.yahoo.com> <20140419171435.GD15884@in-addr.com>
To: Gary Palmer
Cc: FreeBSD FS

On Apr 19, 2014, at 10:14 PM, Gary Palmer wrote:

> Building ceph 0.72.2 on FreeBSD is non-trivial by the looks of it due to
> too many non-portable Linuxisms. Hope their porting effort moves forward.

I've tried. It's a long way off from building and running on FreeBSD, and even with a rough port, there are certain features (the filesystem hooks) where we'll have to either settle for a FUSE port, which means somewhat less performance, or create the requisite shims. Unfortunately, it doesn't seem like anyone in the CEPH project itself uses anything but Linux, which is probably key.

- Jordan

P.S.  Hey Gary, gee, long time!
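[A note on the zpool offline suggestion earlier in this thread: the long number is the vdev GUID that ZFS prints when it can no longer open the underlying device, and zpool subcommands such as offline, replace, and detach accept that GUID in place of a device name. A minimal sketch against an imported pool; the pool and replacement device names are hypothetical, the GUID is the one from the thread:

    # Only works once the pool is imported; "tank" and "da7" are hypothetical.
    zpool status tank                             # shows 248348789931078390  UNAVAIL
    zpool replace tank 248348789931078390 da7     # swap the unopenable member for a new disk
]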
From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 17:28:00 2014
Date: Sat, 19 Apr 2014 10:27:59 -0700
Subject: Re: ZFS unable to import pool
From: Freddie Cash
To: Gena Guchin
References: <3A9BFA07-7142-4F48-9151-E6BB5FFFD71D@icloud.com>
Cc: FreeBSD Filesystems

On Apr 19, 2014 10:21 AM, "Gena Guchin" wrote:
>
> # zpool offline storage 248348789931078390
> cannot open 'storage': no such pool
>
> :(

Darn. Was hoping that would work on an exported pool.
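[The failure is expected in one sense: zpool offline only operates on an imported pool, and this pool is exported, so everything has to happen at import time. A sketch of the usual next step, not advice given in the thread, and with no guarantee it helps here (a raidz1 with a single missing member would normally import as DEGRADED rather than refuse as UNAVAIL):

    # Forced, read-only import attempt on the exported, damaged pool.
    zpool import -f -o readonly=on storage
    zpool status storage      # if the import succeeds, raidz1-0 runs DEGRADED
]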
From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 17:55:32 2014
Message-ID: <5352B889.9050608@freebsd.org>
Date: Sun, 20 Apr 2014 01:55:21 +0800
From: Julian Elischer
To: Rainer Duffner, Outback Dingo
Subject: Re: What happened with the GlusterFS port?
References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de>
In-Reply-To: <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de>
Cc: FreeBSD FS

On 4/19/14, 4:43 PM, Rainer Duffner wrote:
> On 18.04.2014 at 21:51, Outback Dingo wrote:
>
>>
>> On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno wrote:
>> On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote:
>>> Hi,
>>>
>>> does anybody know where the effort to port GlusterFS to FreeBSD went?
>>>
>>> There's this (very) outdated wiki-page:
>>> https://wiki.freebsd.org/GlusterFS
>>>
>>> and there's the SoC project:
>>> https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport
>>>
>>> But nothing seems to have happened after it was finished.
>>
>> The port made decent progress over the GSOC period, but no, it never
>> gained any traction to end up in the ports collection.
>>
>>
>
> That is very unfortunate.
>
>
>
>> not to hijack the thread but, gluster's ancient, use riak-cs, or swift, or port leofs
>>
>>
>
> Currently, the unavailability of GlusterFS in FreeBSD (vs. its availability in Linux) is sort of a deal-breaker for some projects here.
>
> There are, regrettably, a large number of legacy applications that rely on a traditional filesystem.
> An equally large number of customers continue to rely on these same applications for the foreseeable future (and they pay us to run the stuff).
> Traditionally, I would have just suggested a ZFS NFS fileserver - but that adds a single point of failure, manual failover with ZFS sends/HAST, etc.

hey check out panzura's product.. (disclaimer: I work there)

>
> GlusterFS would eliminate this (in situations where the customer needs a number of servers anyway).
>
> I guess it won't happen until somebody is paid to do it (SoC sort of proved that) - directly or indirectly.
>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
>

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 18:21:45 2014
Subject: Re: ZFS unable to import pool
From: Gena Guchin
Date: Sat, 19 Apr 2014 10:21:30 -0700
References: <3A9BFA07-7142-4F48-9151-E6BB5FFFD71D@icloud.com>
To: Freddie Cash
Cc: FreeBSD Filesystems

# zpool offline storage 248348789931078390
cannot open 'storage': no such pool

:(

On Apr 19, 2014, at 10:18 AM, Freddie Cash wrote:
>
> On Apr 19, 2014 10:12 AM, "Gena Guchin" wrote:
> >
> > Hello FreeBSD users,
> >
> > I have this huge problem with my ZFS server. I have accidentally
> > formatted one of the drives in an exported ZFS pool, and now I can't
> > import the pool back. This is an extremely important pool for me. The
> > device that is missing is still attached to the system. Any help would
> > be greatly appreciated.
> >
> >
> >
> >
> > #uname -a
> > FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16
> > 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
> >
> > #zpool import
> >    pool: storage
> >      id: 11699153865862401654
> >   state: UNAVAIL
> >  status: One or more devices are missing from the system.
> >  action: The pool cannot be imported. Attach the missing
> >         devices and try again.
> > see: http://illumos.org/msg/ZFS-8000-6X
> > config:
> >
> >         storage                 UNAVAIL  missing device
> >           raidz1-0              DEGRADED
> >             ada3                ONLINE
> >             ada4                ONLINE
> >             ada5                ONLINE
> >             ada6                ONLINE
> >             248348789931078390  UNAVAIL  cannot open
> >         cache
> >           ada1s2
> >         logs
> >           ada1s1                ONLINE
>
> If you do the following, can you then import it:
> zpool offline storage 248348789931078390
>

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 19 22:26:02 2014
In-Reply-To: <5352B889.9050608@freebsd.org>
References: <28234312-7982-49F5-83FD-865649AA9CCB@ultra-secure.de> <1397850220.58880.16.camel@powernoodle.corp.yahoo.com> <7FAD7618-593D-4797-9EE9-BA36A87CE79B@ultra-secure.de> <5352B889.9050608@freebsd.org>
Date: Sat, 19 Apr 2014 18:26:01 -0400
Subject: Re: What happened with the GlusterFS port?
From: Outback Dingo
To: Julian Elischer
Cc: FreeBSD FS

On Sat, Apr 19, 2014 at 1:55 PM, Julian Elischer wrote:

> On 4/19/14, 4:43 PM, Rainer Duffner wrote:
>
>> On 18.04.2014 at 21:51, Outback Dingo wrote:
>>
>>
>>> On Fri, Apr 18, 2014 at 3:43 PM, Sean Bruno
>>> wrote:
>>> On Fri, 2014-04-18 at 21:30 +0200, Rainer Duffner wrote:
>>>
>>>> Hi,
>>>>
>>>> does anybody know where the effort to port GlusterFS to FreeBSD went?
>>>>
>>>> There's this (very) outdated wiki-page:
>>>> https://wiki.freebsd.org/GlusterFS
>>>>
>>>> and there's the SoC project:
>>>> https://wiki.freebsd.org/SummerOfCode2013/GlusterFSport
>>>>
>>>> But nothing seems to have happened after it was finished.
>>>>
>>>
>>> The port made decent progress over the GSOC period, but no, it never
>>> gained any traction to end up in the ports collection.
>>>
>>>
>>
>> That is very unfortunate.
>>
>>
>>> not to hijack the thread but, gluster's ancient, use riak-cs, or swift,
>>> or port leofs
>>>
>>
>> Currently, the unavailability of GlusterFS in FreeBSD (vs. its
>> availability in Linux) is sort of a deal-breaker for some projects here.
>>
>> There are, regrettably, a large number of legacy applications that rely on
>> a traditional filesystem.
>> An equally large number of customers continue to rely on these same
>> applications for the foreseeable future (and they pay us to run the stuff).
>> Traditionally, I would have just suggested a ZFS NFS fileserver - but that
>> adds a single point of failure, manual failover with ZFS sends/HAST, etc.
>
> hey check out panzura's product.. (disclaimer: I work there)
>

Shameless plug...... it's not Free either......... !!!!

>> GlusterFS would eliminate this (in situations where the customer needs a
>> number of servers anyway).
>>
>> I guess it won't happen until somebody is paid to do it (SoC sort of
>> proved that) - directly or indirectly.
>>
>>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 01:06:44 2014
Date: Sun, 20 Apr 2014 01:06:43 GMT
Message-Id: <201404200106.s3K16hho073439@freefall.freebsd.org>
To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
From: linimon@FreeBSD.org
Subject: Re: kern/186720: [xfs] is xfs now unsupported in the kernel?

Old Synopsis: FS XFS
New Synopsis: [xfs] is xfs now unsupported in the kernel?

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Sun Apr 20 01:05:29 UTC 2014
Responsible-Changed-Why:
retitle and assign.
http://www.freebsd.org/cgi/query-pr.cgi?pr=186720 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 01:32:08 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9CABB646; Sun, 20 Apr 2014 01:32:08 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 71DDC16BD; Sun, 20 Apr 2014 01:32:08 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K1W8uR083069; Sun, 20 Apr 2014 01:32:08 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K1W8VL083068; Sun, 20 Apr 2014 01:32:08 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 01:32:08 GMT Message-Id: <201404200132.s3K1W8VL083068@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/172092: [zfs] [panic] zfs import panics kernel X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 01:32:08 -0000 Old Synopsis: zfs import panics kernel New Synopsis: [zfs] [panic] zfs import panics kernel Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 01:31:49 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=172092 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 01:36:55 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8CDF2933; Sun, 20 Apr 2014 01:36:55 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 60E6516E2; Sun, 20 Apr 2014 01:36:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K1atmh083282; Sun, 20 Apr 2014 01:36:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K1atvV083281; Sun, 20 Apr 2014 01:36:55 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 01:36:55 GMT Message-Id: <201404200136.s3K1atvV083281@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-amd64@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/173235: [smbfs] [panic] Have received two crashes within 1 day after installing new packages: Fatal trap 12: page fault in kernel mode X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 01:36:55 -0000 Old Synopsis: Have received two crashes within 1 day after installing new packages: Fatal trap 12: page fault in kernel mode New Synopsis: [smbfs] [panic] Have received two crashes within 1 day after installing new packages: Fatal trap 12: page fault in kernel mode Responsible-Changed-From-To: freebsd-amd64->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 01:35:11 UTC 2014 Responsible-Changed-Why: reclassify. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=173235 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 01:41:48 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 445DDC23; Sun, 20 Apr 2014 01:41:48 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 183431708; Sun, 20 Apr 2014 01:41:48 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K1fl7u086285; Sun, 20 Apr 2014 01:41:47 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K1flhK086284; Sun, 20 Apr 2014 01:41:47 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 01:41:47 GMT Message-Id: <201404200141.s3K1flhK086284@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/162195: [softupdates] [panic] panic with soft updates journaling during umount -f X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 01:41:48 -0000 Old Synopsis: panic with soft updates journaling during umount -f New Synopsis: [softupdates] [panic] panic with soft updates journaling during umount -f Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 01:41:31 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=162195 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 02:17:03 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9EFC03D0; Sun, 20 Apr 2014 02:17:03 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7348B19E5; Sun, 20 Apr 2014 02:17:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K2H3RS097960; Sun, 20 Apr 2014 02:17:03 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K2H3tS097959; Sun, 20 Apr 2014 02:17:03 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 02:17:03 GMT Message-Id: <201404200217.s3K2H3tS097959@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/184771: [nfs] [panic] panic on nfs mount X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 02:17:03 -0000 Old Synopsis: panic on nfs mount New Synopsis: [nfs] [panic] panic on nfs mount Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 02:16:45 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=184771 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 02:18:56 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 186C15CD; Sun, 20 Apr 2014 02:18:56 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E12641A01; Sun, 20 Apr 2014 02:18:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K2Itha098162; Sun, 20 Apr 2014 02:18:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K2ItLO098161; Sun, 20 Apr 2014 02:18:55 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 02:18:55 GMT Message-Id: <201404200218.s3K2ItLO098161@freefall.freebsd.org> To: tyler@monkeypox.org, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/185374: [msdosfs] [panic] Unmounting msdos filesystem in a bad state causes kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 02:18:56 -0000 Old Synopsis: Unmounting msdos filesystem in a bad state causes kernel panic New Synopsis: [msdosfs] [panic] Unmounting msdos filesystem in a bad state causes kernel panic State-Changed-From-To: open->open State-Changed-By: linimon State-Changed-When: Sun Apr 20 01:48:45 UTC 2014 State-Changed-Why: Over to maintainer(s). 
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 01:48:45 UTC 2014 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=185374 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 02:20:43 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2A878695; Sun, 20 Apr 2014 02:20:43 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 00E3A1A13; Sun, 20 Apr 2014 02:20:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K2KgpL001105; Sun, 20 Apr 2014 02:20:42 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K2KgRQ001104; Sun, 20 Apr 2014 02:20:42 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 02:20:42 GMT Message-Id: <201404200220.s3K2KgRQ001104@freefall.freebsd.org> To: tyler@monkeypox.org, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/185734: [zfs] [panic] panic on stable/10 when writing to ZFS disk X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 02:20:43 -0000 Old Synopsis: panic on stable/10 when writing to ZFS disk New Synopsis: [zfs] [panic] panic on stable/10 when writing to ZFS disk State-Changed-From-To: open->open State-Changed-By: linimon State-Changed-When: Sun Apr 20 01:48:45 UTC 2014 State-Changed-Why: Over to maintainer(s). 
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 01:48:45 UTC 2014 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=185734 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 02:22:48 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A5E0F75D; Sun, 20 Apr 2014 02:22:48 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7A4271A97; Sun, 20 Apr 2014 02:22:48 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3K2Mm0o001301; Sun, 20 Apr 2014 02:22:48 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3K2MmeW001300; Sun, 20 Apr 2014 02:22:48 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 02:22:48 GMT Message-Id: <201404200222.s3K2MmeW001300@freefall.freebsd.org> To: jjh-freebsd@deterlab.net, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/185827: [nfs] [panic] Kernel Panic after upgrading NFS server from FreeBSD 9.1 to 9.2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 02:22:48 -0000 Old Synopsis: Kernel Panic after upgrading NFS server from FreeBSD 9.1 to 9.2 New Synopsis: [nfs] [panic] Kernel Panic after upgrading NFS server from FreeBSD 9.1 to 9.2 State-Changed-From-To: open->open State-Changed-By: linimon State-Changed-When: Sun Apr 20 01:48:45 UTC 2014 State-Changed-Why: Over to maintainer(s). 
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 01:48:45 UTC 2014 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=185827 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 12:55:26 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D99AE2FC; Sun, 20 Apr 2014 12:55:26 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id ABB3D1C61; Sun, 20 Apr 2014 12:55:26 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3KCtQtt032892; Sun, 20 Apr 2014 12:55:26 GMT (envelope-from rmacklem@freefall.freebsd.org) Received: (from rmacklem@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3KCtPdx032891; Sun, 20 Apr 2014 12:55:25 GMT (envelope-from rmacklem) Date: Sun, 20 Apr 2014 12:55:25 GMT Message-Id: <201404201255.s3KCtPdx032891@freefall.freebsd.org> To: barber@mail.ru, rmacklem@FreeBSD.org, freebsd-fs@FreeBSD.org From: rmacklem@FreeBSD.org Subject: Re: kern/184771: [nfs] [panic] panic on nfs mount X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 12:55:26 -0000 Synopsis: [nfs] [panic] panic on nfs mount State-Changed-From-To: open->closed State-Changed-By: rmacklem State-Changed-When: Sun Apr 20 12:53:06 UTC 2014 State-Changed-Why: I'm pretty sure this crash was fixed by r259765, which has been MFC'd to stable/9 as r261061. It was specific to an NFSv2 mount and the stack corruption only seemed to affect i386. If you experience a similar crash with an up-to-date stable/9 or 9.3 system (when released), please submit another PR. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=184771 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 12:58:23 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E38F937D; Sun, 20 Apr 2014 12:58:23 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B81A71C71; Sun, 20 Apr 2014 12:58:23 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3KCwN2d032971; Sun, 20 Apr 2014 12:58:23 GMT (envelope-from rmacklem@freefall.freebsd.org) Received: (from rmacklem@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3KCwNMG032970; Sun, 20 Apr 2014 12:58:23 GMT (envelope-from rmacklem) Date: Sun, 20 Apr 2014 12:58:23 GMT Message-Id: <201404201258.s3KCwNMG032970@freefall.freebsd.org> To: jjh-freebsd@deterlab.net, rmacklem@FreeBSD.org, freebsd-fs@FreeBSD.org From: rmacklem@FreeBSD.org Subject: Re: kern/185827: [nfs] [panic] Kernel Panic after upgrading NFS server from FreeBSD 9.1 to 9.2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 12:58:24 -0000 Synopsis: [nfs] [panic] Kernel Panic after upgrading NFS server from FreeBSD 9.1 to 9.2 State-Changed-From-To: open->closed State-Changed-By: rmacklem State-Changed-When: Sun Apr 20 12:55:44 UTC 2014 State-Changed-Why: I'm pretty sure this crash was fixed by r259765, which was MFC'd to stable/9 as r261061. It was caused by an NFSv2 mount and the stack corruption only seemed to affect i386 systems. Thanks go to John for his help with isolating the problem. If anyone experiences a similar crash with an up to date stable/9 or 9.3 (when it is released) system, please submit another PR. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=185827 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 21:57:26 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 87B62873; Sun, 20 Apr 2014 21:57:26 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5BFB4186C; Sun, 20 Apr 2014 21:57:26 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3KLvQLg000578; Sun, 20 Apr 2014 21:57:26 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3KLvQmR000577; Sun, 20 Apr 2014 21:57:26 GMT (envelope-from linimon) Date: Sun, 20 Apr 2014 21:57:26 GMT Message-Id: <201404202157.s3KLvQmR000577@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/185232: [nfs] [patch] Kernel page fault in jailed() via vn_stat() when using uio_td from nfsrv_read() X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 21:57:26 -0000 Old Synopsis: Kernel page fault in jailed() via vn_stat() when using uio_td from nfsrv_read() New Synopsis: [nfs] [patch] Kernel page fault in jailed() via vn_stat() when using uio_td from nfsrv_read() Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun Apr 20 21:57:05 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=185232 From owner-freebsd-fs@FreeBSD.ORG Sun Apr 20 23:04:48 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B14CE751; Sun, 20 Apr 2014 23:04:48 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 84B6D1D49; Sun, 20 Apr 2014 23:04:48 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3KN4miK023986; Sun, 20 Apr 2014 23:04:48 GMT (envelope-from rmacklem@freefall.freebsd.org) Received: (from rmacklem@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3KN4m10023982; Sun, 20 Apr 2014 23:04:48 GMT (envelope-from rmacklem) Date: Sun, 20 Apr 2014 23:04:48 GMT Message-Id: <201404202304.s3KN4m10023982@freefall.freebsd.org> To: rmacklem@FreeBSD.org, freebsd-fs@FreeBSD.org, rmacklem@FreeBSD.org From: rmacklem@FreeBSD.org Subject: Re: kern/185232: [nfs] [patch] Kernel page fault in jailed() via vn_stat() when using uio_td from nfsrv_read() X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Apr 2014 23:04:48 -0000 Synopsis: [nfs] [patch] Kernel page fault in jailed() via vn_stat() when using uio_td from nfsrv_read() Responsible-Changed-From-To: freebsd-fs->rmacklem Responsible-Changed-By: rmacklem Responsible-Changed-When: Sun Apr 20 23:03:30 UTC 2014 Responsible-Changed-Why: I'll take this one. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=185232

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 21 11:06:46 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1C191F6F for ; Mon, 21 Apr 2014 11:06:46 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0864C1957 for ; Mon, 21 Apr 2014 11:06:46 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3LB6jP5085693 for ; Mon, 21 Apr 2014 11:06:45 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3LB6jxn085691 for freebsd-fs@FreeBSD.org; Mon, 21 Apr 2014 11:06:45 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 21 Apr 2014 11:06:45 GMT Message-Id: <201404211106.s3LB6jxn085691@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Apr 2014 11:06:46 -0000

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/188443  fs  [smbfs] Segfault with tail(1) when mmap(2) called
o kern/188328  fs  [zfs] UPDATING should provide caveats for running `zpo
o kern/188187  fs  [zfs] [panic] 10-stable: Kernel panic on zpool import:
o kern/187905  fs  [zpool] Confusion zpool with a block size in HDD - blo
o kern/187778  fs  [zfs] Two ZFS filesystems mounted on / at same time
o kern/187594  fs  [zfs] [patch] ZFS ARC behavior problem and fix
s kern/187414  fs  [zfs] ZFS Write Deadlock on 8.4
o kern/187261  fs  [fusefs] FUSE kernel panic when using socket / bind
o kern/186942  fs  [zfs] [panic] Fatal trap 12 (seems zfs related)
o kern/186720  fs  [xfs] is xfs now unsupported in the kernel?
o kern/186652  fs  [smbfs] [panic] crash during umount -a -t smbfs
o kern/186645  fs  [fusefs] Crash after unmounting wdfs
o kern/186574  fs  [zfs] zpool history hangs (infinite loop)
o kern/186515  fs  [gptboot] Doesn't boot with GPT when # of entries over
o kern/186112  fs  [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479
o kern/185963  fs  [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185734  fs  [zfs] [panic] panic on stable/10 when writing to ZFS d
o kern/185374  fs  [msdosfs] [panic] Unmounting msdos filesystem in a bad
o kern/184677  fs  [zfs] [panic] ZFS snapshot umount kernel panic
o kern/184478  fs  [smbfs] mount_smbfs cannot read/write files
o kern/182536  fs  [zfs] zfs deadlock
o kern/181966  fs  [zfs] [panic] Kernel panic in ZFS I/O: solaris assert:
o kern/181834  fs  [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181791  fs  [zfs] ZFS ARC Deadlock
o kern/181565  fs  [swap] Problem with vnode-backed swap space.
o kern/181377  fs  [zfs] zfs recv causes an inconsistant pool
o kern/181281  fs  [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082  fs  [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979  fs  [netsmb][patch]: Fix large files handling
o kern/180876  fs  [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678  fs  [NFS] succesfully exported filesystems being reported
o kern/180438  fs  [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236  fs  [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854  fs  [ufs] FreeBSD kernel crash in UFS
s kern/178467  fs  [zfs] [request] Optimized Checksum Code for ZFS
o kern/178412  fs  [smbfs] Coredump when smbfs mounted
o kern/178388  fs  [zfs] [patch] allow up to 8MB recordsize
o kern/178387  fs  [zfs] [patch] sparse files performance improvements
o kern/178349  fs  [zfs] zfs scrub on deduped data could be much less see
o kern/178329  fs  [zfs] extended attributes leak
o kern/178238  fs  [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231  fs  [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985  fs  [zfs] disk usage problem when copying from one zfs dat
o kern/177971  fs  [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966  fs  [zfs] resilver completes but subsequent scrub reports
o kern/177658  fs  [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536  fs  [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445  fs  [hast] HAST panic
o kern/177240  fs  [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978  fs  [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857  fs  [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253   fs  zpool(8): zfs pool indentation is misleading/wrong
o kern/176141  fs  [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950  fs  [zfs] Possible deadlock in zfs after long uptime
o kern/175897  fs  [zfs] operations on readonly zpool hang
o kern/175449  fs  [unionfs] unionfs and devfs misbehaviour
o kern/175179  fs  [zfs] ZFS may attach wrong device on move
o kern/175071  fs  [softupdates] [panic] softdep_deallocate_dependencies:
o kern/174372  fs  [zfs] Pagefault appears to be related to ZFS
o kern/174315  fs  [zfs] chflags uchg not supported
o kern/174310  fs  [zfs] root point mounting broken on CURRENT with multi
o kern/174279  fs  [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830  fs  [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718  fs  [zfs] phantom directory in zraid2 pool
f kern/173657  fs  [nfs] strange UID map with nfsuserd
o kern/173363  fs  [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173235  fs  [smbfs] [panic] Have received two crashes within 1 day
o kern/173136  fs  [unionfs] mounting above the NFS read-only share panic
o kern/172942  fs  [smbfs] Unmounting a smb mount when the server became
o kern/172630  fs  [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c
o kern/172348  fs  [unionfs] umount -f of filesystem in use with readonly
o kern/172334  fs  [unionfs] unionfs permits recursive union mounts; caus
f kern/172197  fs  [zfs] Userquota (as well as groupquota) does not work
o kern/172092  fs  [zfs] [panic] zfs import panics kernel
o kern/171626  fs  [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415  fs  [zfs] zfs recv fails with "cannot receive incremental
o kern/170945  fs  [gpt] disk layout not portable between direct connect
o bin/170778   fs  [zfs] [panic] FreeBSD panics randomly
o kern/170680  fs  [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170523  fs  [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNT
o kern/170497  fs  [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945  fs  [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480  fs  [zfs] ZFS stalls on heavy I/O
o kern/169398  fs  [zfs] Can't remove file with permanent error
o kern/169339  fs  panic while " : > /etc/123"
o kern/169319  fs  [zfs] zfs resilver can't complete
o kern/168947  fs  [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942  fs  [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158  fs  [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979  fs  [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977  fs  [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688  fs  [fusefs] Incorrect signal handling with direct_io
o kern/167685  fs  [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612  fs  [portalfs] The portal file system gets stuck inside po
o kern/167362  fs  [fusefs] Reproduceble Page Fault when running rsync ov
o kern/167272  fs  [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260  fs  [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109  fs  [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105  fs  [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067  fs  [zfs] [panic] ZFS panics the server
o kern/167065  fs  [zfs] boot fails when a spare is the boot disk
o kern/167048  fs  [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912  fs  [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851  fs  [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477  fs  [nfs] NFS data corruption.
o kern/165950  fs  [ffs] SU+J and fsck problem
o kern/165521  fs  [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392  fs  [ufs] [patch] Multiple mkdir/rmdir fails with errno 31
o kern/165087  fs  [unionfs] lock violation in unionfs
o kern/164472  fs  [ufs] fsck -B panics on particular data inconsistency
o kern/164370  fs  [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261  fs  [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256  fs  [zfs] device entry for volume is not created after zfs
o kern/164184  fs  [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801  fs  [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770  fs  [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501  fs  [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944  fs  [coda] Coda file system module looks broken in 9.0
o kern/162860  fs  [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751  fs  [zfs] [panic] kernel panics during file operations
o kern/162591  fs  [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519  fs  [zfs] "zpool import" relies on buggy realpath() behavi
o kern/162195  fs  [softupdates] [panic] panic with soft updates journali
o kern/161968  fs  [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864  fs  [ufs] removing journaling from UFS partition fails on
o kern/161579  fs  [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533  fs  [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438  fs  [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424  fs  [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280  fs  [zfs] Stack overflow in gptzfsboot
o kern/161205  fs  [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169  fs  [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112  fs  [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893  fs  [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860  fs  [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801  fs  [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790  fs  [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777  fs  [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706  fs  [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591  fs  [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410  fs  [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283  fs  [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930  fs  [ufs] [panic] kernel core
o kern/159402  fs  [zfs][loader] symlinks cause I/O errors
o kern/159357  fs  [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356  fs  [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351  fs  [nfs] [patch] - divide by zero in mountnfs()
o kern/159251  fs  [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077  fs  [zfs] Can't cd .. with latest zfs version
o kern/159048  fs  [smbfs] smb mount corrupts large files
o kern/159045  fs  [zfs] [hang] ZFS scrub freezes system
o kern/158839  fs  [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802  fs  amd(8) ICMP storm and unkillable process.
o kern/158231  fs  [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929  fs  [nfs] NFS slow read
o kern/157399  fs  [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179  fs  [zfs] [panic] zfs/dbuf.c: panic: solaris assert: arc_b
o kern/156797  fs  [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781  fs  [zfs] zfs is losing the snapshot directory,
p kern/156545  fs  [ufs] mv could break UFS on SMP systems
o kern/156193  fs  [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039  fs  [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615  fs  [zfs] zfs v28 broken on sparc64 -current
o kern/155587  fs  [zfs] [panic] kernel panic with zfs
p kern/155411  fs  [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199  fs  [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104   fs  [zfs][patch] use /dev prefix by default when importing
o kern/154930  fs  [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828  fs  [msdosfs] Unable to create directories on external USB
o kern/154491  fs  [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228  fs  [md] md getting stuck in wdrain state
o kern/153996  fs  [zfs] zfs root mount error while kernel is not located
o kern/153753  fs  [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716  fs  [zfs] zpool scrub time remaining is incorrect
o kern/153695  fs  [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680  fs  [xfs] 8.1 failing to mount XFS partitions
o kern/153418  fs  [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351  fs  [zfs] locking directories/files in ZFS
o bin/153258   fs  [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173  fs  [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142   fs  [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126  fs  [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022  fs  [nfs] nfs service hangs with linux client [regression]
o kern/151942  fs  [zfs] [panic] panic during ls(1) zfs snapshot director
o kern/151905  fs  [zfs] page fault under load in /sbin/zfs
o bin/151713   fs  [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648  fs  [zfs] disk wait bug
o kern/151629  fs  [fs] [patch] Skip empty directory entries during name
o kern/151330  fs  [zfs] will unshare all zfs filesystem after execute a
o kern/151326  fs  [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251  fs  [ufs] Can not create files on filesystem with heavy us
o kern/151226  fs  [zfs] can't delete zfs snapshot
o kern/150503  fs  [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501  fs  [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390  fs  [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336  fs  [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208  fs  mksnap_ffs(8) hang/deadlock
o kern/149173  fs  [patch] [zfs] make OpenSolaris installa
o kern/149015  fs  [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014  fs  [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013  fs  [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504  fs  [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490  fs  [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368  fs  [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138  fs  [zfs] zfs raidz pool commands freeze
o kern/147903  fs  [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881  fs  [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420  fs  [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941  fs  [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786  fs  [zfs] zpool import hangs with checksum errors
o kern/146708  fs  [softupdates] [panic] Kernel panic in softdep_disk_wri
o kern/146528  fs  [zfs] Severe memory leak in ZFS on i386
o kern/146502  fs  [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750  fs  [unionfs] [hang] unionfs locks the machine
s kern/145712  fs  [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411  fs  [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309   fs  bsdlabel: Editing disk label invalidates the whole dev
o kern/145272  fs  [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246  fs  [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238  fs  [zfs] [panic] kernel panic on zpool clear tank
o kern/145229  fs  [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189  fs  [nfs] nfsd performs abysmally under load
o kern/144929  fs  [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447  fs  [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416  fs  [panic] Kernel panic on online filesystem optimization
s kern/144415  fs  [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234  fs  [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825  fs  [nfs] [panic] Kernel panic on NFS client
o bin/143572   fs  [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212  fs  [nfs] NFSv4 client strange work ...
o kern/143184  fs  [zfs] [lor] zfs/bufwait LOR
o kern/142878  fs  [zfs] [vfs] lock order reversal
o kern/142489  fs  [zfs] [lor] allproc/zfs LOR
o kern/142466  fs  Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306  fs  [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068  fs  [ufs] BSD labels are got deleted spontaneously
o kern/141950  fs  [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897  fs  [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463  fs  [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141091  fs  [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086  fs  [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010  fs  [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888  fs  [zfs] boot fail from zfs root while the pool resilveri
o kern/140661  fs  [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640  fs  [zfs] snapshot crash
o kern/140068  fs  [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725  fs  [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715  fs  [zfs] vfs.numvnodes leak on busy zfs
p bin/139651   fs  [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407  fs  [smbfs] [panic] smb mount causes system crash if remot
o kern/138662  fs  [panic] ffs_blkfree: freeing free block
o kern/138421  fs  [ufs] [patch] remove UFS label limitations
o kern/138202  fs  mount_msdosfs(1) see only 2Gb
o kern/137588  fs  [unionfs] [lor] LOR nfs/ufs/nfs
o kern/136968  fs  [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945  fs  [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944  fs  [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873  fs  [ntfs] Missing directories/files on NTFS volume
p kern/136470  fs  [nfs] Cannot mount / in read-only, over NFS
o kern/135546  fs  [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469  fs  [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050  fs  [zfs] ZFS clears/hides disk errors on reboot
o kern/134491  fs  [zfs] Hot spares are rather cold...
o kern/133676  fs  [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174  fs  [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960  fs  [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397  fs  reboot causes filesystem corruption (failure to sync b
o kern/132331  fs  [ufs] [lor] LOR ufs and syncer
o kern/132237  fs  [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145  fs  [panic] File System Hard Crashes
o kern/131441  fs  [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360  fs  [nfs] poor scaling behavior of the NFS server under lo
o kern/131342  fs  [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341   fs  makefs: error "Bad file descriptor" on the mount poin
o kern/130920  fs  [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210  fs  [nullfs] Error by check nullfs
o kern/129760  fs  [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488  fs  [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231  fs  [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152  fs  [panic] non-userfriendly panic when trying to mount(8)
o kern/127787  fs  [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270   fs  fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029  fs  [panic] mount(8): trying to mount a write protected zi
o kern/126973  fs  [unionfs] [hang] System hang with unionfs and init chr
o kern/126553  fs  [unionfs] unionfs move directory problem 2 (files appe
o kern/126287  fs  [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895  fs  [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738  fs  [zfs] [request] SHA256 acceleration in ZFS
o kern/123939  fs  [msdosfs] corrupts new files
o bin/123574   fs  [unionfs] df(1) -t option destroys info for unionfs (a
o kern/122380  fs  [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172   fs  [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898   fs  [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072   fs  [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483  fs  [ntfs] [patch] NTFS filesystem locking changes
o kern/120482  fs  [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912  fs  [2tb] disk sizing/geometry problem with large array
o kern/118713  fs  [minidump] [patch] Display media size required for a k
o kern/118318  fs  [nfs] NFS server hangs under special circumstances
o bin/118249   fs  [ufs] mv(1): moving a directory changes its mtime
o kern/118126  fs  [nfs] [patch] Poor NFS server write performance
o kern/118107  fs  [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954  fs  [ufs] dirhash on very large directories blocks the mac
o bin/117315   fs  [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158  fs  [zfs] [panic] zpool scrub causes panic if geli vdevs d
o bin/116980   fs  [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931  fs  lack of fsck_cd9660 prevents mounting iso images with
o kern/116583  fs  [ffs] [hang] System freezes for short time when using
o bin/115361   fs  [zfs] mount(8) gets into a state where it won't set/un
o kern/114955  fs  [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs  [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs  [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs  [patch] [request] add -d option to umount(8) to detach
o kern/113852  fs  [smbfs] smbfs does not properly implement DFS referral
o bin/113838   fs  [patch] [request] mount(8): add support for relative p
o bin/113049   fs  [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs  [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843  fs  [msdosfs] Long Names of files are incorrectly created
o kern/111782  fs  [ufs] dump(8) fails horribly for large filesystems
s bin/111146   fs  [2tb] fsck(8) fails on 6T filesystem
o bin/107829   fs  [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107  fs  [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406  fs  [ufs] Processes get stuck in "ufs" state under persist
o kern/103035  fs  [ntfs] Directories in NTFS mounted disc images appear
o kern/101324  fs  [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290   fs  [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498    fs  [request] newfs(8) has no option to clear the first 12
o kern/97377   fs  [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222   fs  [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849   fs  [ufs] rename on UFS filesystem is not atomic
o bin/94810    fs  fsck(8) incorrectly reports 'file system marked clean'
o kern/94769   fs  [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733   fs  [smbfs] smbfs may cause double unlock
o kern/93942   fs  [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272   fs  [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134   fs  [smbfs] [patch] Preserve access and modification time
a kern/90815   fs  [smbfs] [patch] SMBFS with character conversions somet
o kern/88657   fs  [smbfs] windows client hang when browsing a samba shar
o kern/88555   fs  [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966    fs  [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859   fs  [smbfs] System reboot while umount smbfs.
o kern/86587   fs  [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494    fs  fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088   fs  [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779    fs  Background-fsck checks one filesystem twice and omits
o kern/73484   fs  [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs  [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs  [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600    fs  fsck(8) throws files away when it can't grow lost+foun
o kern/68978   fs  [panic] [ufs] crashes with failing hard disk, loose po
o kern/67326   fs  [msdosfs] crash after attempt to mount write protected
o kern/65920   fs  [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs  [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs  [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs  [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs  [hang] Unbounded inode allocation causes kernel to loc
o kern/36566   fs  [smbfs] System reboot with dead smb mount and umount
o bin/27687    fs  fsck(8) wrapper is not properly passing options to fsc
o kern/18874   fs  [2TB] 32bit NFS servers export wrong negative values t
o kern/9619    fs  [nfs] Restarting mountd kills existing mounts

360 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 21 15:07:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1A057AB5 for ; Mon, 21 Apr 2014 15:07:22 +0000 (UTC) Received: from mail-ie0-x234.google.com (mail-ie0-x234.google.com [IPv6:2607:f8b0:4001:c03::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D5F841111 for ; Mon, 21 Apr 2014 15:07:21 +0000 (UTC) Received: by mail-ie0-f180.google.com with SMTP id as1so3842416iec.25 for ; Mon, 21 Apr 2014 08:07:21 -0700 (PDT) X-Received: by 10.50.32.70 with SMTP id g6mr22856557igi.0.1398092841169; Mon, 21 Apr 2014 08:07:21 -0700 (PDT) Received: from vpn132.rw1.your.org (vpn132.rw1.your.org.
[204.9.51.132]) by mx.google.com with ESMTPSA id p4sm21689248igy.7.2014.04.21.08.07.19 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Mon, 21 Apr 2014 08:07:19 -0700 (PDT) From: Kevin Day Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable Subject: encfs with 10.0's FUSE doesn't work Message-Id: Date: Mon, 21 Apr 2014 10:07:18 -0500 To: freebsd-fs@FreeBSD.org Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1877.9\)) X-Mailer: Apple Mail (2.1877.9) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Apr 2014 15:07:22 -0000

fusefs-encfs is in ports, and worked for me in older releases of FreeBSD with the "built from ports" version of fusefs-kmod. It builds okay in 10.0-RELEASE, lets you mount an encrypted filesystem, lets you create directories in it, but fails at actually creating any files. I traced the problem down to the fact that encfs doesn't implement FUSE_CREATE; it relies on the older fuse behavior of creating a file being accomplished by FUSE_MKNOD + FUSE_OPEN.

To reproduce, install fusefs-encfs then create an encrypted filesystem:

root@media:~ # mkdir /mnt/encfs
root@media:~ # mkdir /tmp/encsource
root@media:~ # encfs /tmp/encsource /mnt/encfs
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?>

Standard configuration selected.
Configuration finished. The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/block", version 3:0:1
Key Size: 192 bits
Block Size: 1024 bytes
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File holes passed through to ciphertext.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism. However, the password can be changed
later using encfsctl.

New Encfs Password:
Verify Encfs Password:

It's mounted and seems to work:

root@media:~ # mount |grep encfs
/dev/fuse on /mnt/encfs (fusefs, local, synchronous)
root@media:~ # mkdir /mnt/encfs/test
root@media:~ # ls -l /mnt/encfs
total 9
drwxr-xr-x  2 root  wheel  2 Apr 21 07:13 test

But trying to create a file fails, and fails differently the first time compared to the second time:

root@media:~ # touch /mnt/encfs/xx
touch: /mnt/encfs/xx: Function not implemented
root@media:~ # touch /mnt/encfs/xx
touch: /mnt/encfs/xxx: Invalid argument

This matches the behavior of fuse_vnops.c if FUSE_CREATE isn't recognized by the fuse program. The first time through, if it sees ENOSYS back from the daemon it does this:

   361          if (err) {
   362                  if (err == ENOSYS)
   363                          fsess_set_notimpl(mp, FUSE_CREATE);
   364                  debug_printf("create: got err=%d from daemon\n", err);
   365                  goto out;
   366          }

which is recognized the second time around:

   345          if (!fsess_isimpl(mp, FUSE_CREATE)) {
   346                  debug_printf("eh, daemon doesn't implement create?\n");
   347                  return (EINVAL);
   348          }

This is happening because encfs doesn't define a create function.
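A plausible userland workaround, sketched below, would be to give encfs the create handler it lacks by composing the mknod and open callbacks it already registers; that is the same "create = mknod + open" fallback the FUSE documentation promises. The handler body is a minimal sketch under stated assumptions: encfs_mknod() and encfs_open() are stand-in names for encfs's real handlers, and none of this is the OpenBSD patch or actual encfs code.

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/stat.h>

/* Assumed stand-ins for the mknod/open handlers encfs already has. */
extern int encfs_mknod(const char *path, mode_t mode, dev_t rdev);
extern int encfs_open(const char *path, struct fuse_file_info *fi);

/*
 * Emulate the pre-2.6.15 Linux fallback inside the filesystem itself:
 * a FUSE_CREATE becomes a FUSE_MKNOD of a regular file followed by a
 * FUSE_OPEN.  High-level libfuse handlers return 0 or a negative
 * errno, so any failure from mknod is passed straight back.
 */
static int
encfs_create(const char *path, mode_t mode, struct fuse_file_info *fi)
{
        int err;

        err = encfs_mknod(path, mode | S_IFREG, 0);
        if (err != 0)
                return (err);
        return (encfs_open(path, fi));
}

With a handler like that registered, the fsess_isimpl() test above would pass, so neither the ENOSYS nor the EINVAL path would ever be taken. The commented-out registration hook quoted next, from encfs's own main.cpp, is exactly where such a handler would be wired up.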
From encfs's main.cpp:

//encfs_oper.create = encfs_create;

It looks like this is supposed to be allowable, because the fuse documentation at http://fuse.sourceforge.net/doxygen/structfuse__operations.html says:

int (* fuse_operations::create)(const char *, mode_t, struct fuse_file_info *)

Create and open a file

If the file does not exist, first create it with the specified mode, and then open it.

If this method is not implemented or under Linux kernel versions earlier than 2.6.15, the mknod() and open() methods will be called instead.

Introduced in version 2.5

It looks like the linux kernel does this: https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/fs/fuse/dir.c#n535

OpenBSD has a rather large patch to encfs that gives it an encfs_create() function, but it seems to be relying on behavior specific to OpenBSD's implementation of fuse.

Is there a reason that the included fuse support is no longer allowing for this?

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 21 19:29:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7FA9BFD4 for ; Mon, 21 Apr 2014 19:29:34 +0000 (UTC) Received: from mail-out.apple.com (mail-out.apple.com [17.151.62.51]) (using TLSv1 with cipher DES-CBC3-SHA (168/168 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 633431CBA for ; Mon, 21 Apr 2014 19:29:34 +0000 (UTC) MIME-version: 1.0 Content-type: text/plain; charset=windows-1252 Received: from mail-out.apple.com by local.mail-out.apple.com (Oracle Communications Messaging Server 7.0.5.30.0 64bit (built Oct 22 2013)) id <0N4E00A00BDYQL00@local.mail-out.apple.com> for freebsd-fs@freebsd.org; Mon, 21 Apr 2014 12:29:27 -0700 (PDT) Received: from relay8.apple.com ([17.128.113.102]) by local.mail-out.apple.com (Oracle Communications Messaging Server 7.0.5.30.0 64bit (built Oct 22 2013)) with ESMTP id <0N4E00HGMBFHVWY0@local.mail-out.apple.com> for freebsd-fs@freebsd.org; Mon, 21 Apr 2014 12:29:27 -0700 (PDT) Received: from ggulchin.apple.com (ggulchin.apple.com [17.199.68.248]) (using TLS with cipher AES128-SHA (128/128 bits)) (Client did not present a certificate) by relay8.apple.com (Apple SCV relay) with SMTP id 9D.29.05667.79175535; Mon, 21 Apr 2014 12:29:27 -0700 (PDT) From: Gena Guchin Content-transfer-encoding: quoted-printable Date: Mon, 21 Apr 2014 12:29:27 -0700 Subject: ZFS unable to import pool To: FreeBSD Filesystems Message-id: X-Mailer: Apple Mail (2.1878) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive:
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Apr 2014 19:29:34 -0000

Hello FreeBSD users,

My apologies for reposting, but I really need your help! I have a huge problem with my ZFS server: I accidentally formatted one of the drives in an exported ZFS pool, and now I can't import the pool. This pool is extremely important to me. The device that is missing is still attached to the system. Any help would be greatly appreciated.

#uname -a
FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

#zpool import
   pool: storage
     id: 11699153865862401654
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
         devices and try again.
    see: http://illumos.org/msg/ZFS-8000-6X
 config:

        storage                 UNAVAIL  missing device
          raidz1-0              DEGRADED
            ada3                ONLINE
            ada4                ONLINE
            ada5                ONLINE
            ada6                ONLINE
            248348789931078390  UNAVAIL  cannot open
        cache
          ada1s2
        logs
          ada1s1                ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -

# zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.

Every feature flags pool has all supported features enabled.

# zfs upgrade
This system is currently running ZFS filesystem version 5.

All filesystems are formatted with the current version.

Thanks a lot!

-- Gena

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 01:06:37 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7852F15C for ; Tue, 22 Apr 2014 01:06:37 +0000 (UTC) Received: from mail-out.apple.com (bramley.apple.com [17.151.62.49]) (using TLSv1 with cipher DES-CBC3-SHA (168/168 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 59C6E1CC9 for ; Tue, 22 Apr 2014 01:06:36 +0000 (UTC) MIME-version: 1.0 Content-type: text/plain; charset=windows-1252 Received: from mail-out.apple.com by local.mail-out.apple.com (Oracle Communications Messaging Server 7.0.5.30.0 64bit (built Oct 22 2013)) id <0N4E00200O05FD00@local.mail-out.apple.com> for freebsd-fs@freebsd.org; Mon, 21 Apr 2014 17:06:21 -0700 (PDT) Received: from relay2.apple.com ([17.128.113.67]) by local.mail-out.apple.com (Oracle Communications Messaging Server 7.0.5.30.0 64bit (built Oct 22 2013)) with ESMTP id <0N4E00MVJO9WEA21@local.mail-out.apple.com> for freebsd-fs@freebsd.org; Mon, 21 Apr 2014 17:06:21 -0700 (PDT) Received: from ggulchin.apple.com (ggulchin.apple.com [17.199.68.248]) (using TLS with cipher AES128-SHA (128/128 bits)) (Client did not present a certificate) by relay2.apple.com (Apple SCV relay) with SMTP id CE.FB.05587.D72B5535; Mon, 21 Apr 2014 17:06:21 -0700 (PDT) Subject: Fwd: ZFS unable to import pool From: Gena Guchin Date: Mon, 21 Apr 2014 17:06:20 -0700 Content-transfer-encoding: quoted-printable Message-id: References: <17405F3F-D209-40B5-85A1-01AB94DC54BE@icloud.com> To: FreeBSD Filesystems X-Mailer: Apple Mail (2.1878)
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Apr 2014 01:06:37 -0000

Begin forwarded message:

> From: Gena Guchin
> Subject: Re: ZFS unable to import pool
> Date: April 21, 2014 at 3:18:43 PM PDT
> To: Hakisho Nukama
>
> Hakisho,
>
> this is weird: while I do not see ONLINE next to cache device ada1s2, it is the same device as logs ada1s1, just a different slice.
> I tried to see the difference between zfs labels on that device.
>
> [gena@ggulchin]-pts/0:57# zdb -l /dev/ada1s2
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>     version: 5000
>     state: 4
>     guid: 7108193965515577889
> --------------------------------------------
> LABEL 1
> --------------------------------------------
>     version: 5000
>     state: 4
>     guid: 7108193965515577889
> --------------------------------------------
> LABEL 2
> --------------------------------------------
>     version: 5000
>     state: 4
>     guid: 7108193965515577889
> --------------------------------------------
> LABEL 3
> --------------------------------------------
>     version: 5000
>     state: 4
>     guid: 7108193965515577889
> [gena@ggulchin]-pts/0:58# zdb -l /dev/ada1s1
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>     version: 5000
>     name: 'storage'
>     state: 1
>     txg: 14792113
>     pool_guid: 11699153865862401654
>     hostid: 3089874380
>     hostname: 'ggulchin.homeunix.com'
>     top_guid: 15354816574459194272
>     guid: 15354816574459194272
>     is_log: 1
>     vdev_children: 3
>     vdev_tree:
>         type: 'disk'
>         id: 1
>         guid: 15354816574459194272
>         path: '/dev/ada1s1'
>         phys_path: '/dev/ada1s1'
>         whole_disk: 1
>         metaslab_array: 125
>         metaslab_shift: 27
>         ashift: 9
>         asize: 16100884480
>         is_log: 1
>         DTL: 137
>         create_txg: 10478480
>     features_for_read:
> --------------------------------------------
> LABEL 1
> --------------------------------------------
>     version: 5000
>     name: 'storage'
>     state: 1
>     txg: 14792113
>     pool_guid: 11699153865862401654
>     hostid: 3089874380
>     hostname: 'ggulchin.homeunix.com'
>     top_guid: 15354816574459194272
>     guid: 15354816574459194272
>     is_log: 1
>     vdev_children: 3
>     vdev_tree:
>         type: 'disk'
>         id: 1
>         guid: 15354816574459194272
>         path: '/dev/ada1s1'
>         phys_path: '/dev/ada1s1'
>         whole_disk: 1
>         metaslab_array: 125
>         metaslab_shift: 27
>         ashift: 9
>         asize: 16100884480
>         is_log: 1
>         DTL: 137
>         create_txg: 10478480
>     features_for_read:
> --------------------------------------------
> LABEL 2
> --------------------------------------------
>     version: 5000
>     name: 'storage'
>     state: 1
>     txg: 14792113
>     pool_guid: 11699153865862401654
>     hostid: 3089874380
>     hostname: 'ggulchin.homeunix.com'
>     top_guid: 15354816574459194272
>     guid: 15354816574459194272
>     is_log: 1
>     vdev_children: 3
>     vdev_tree:
>         type: 'disk'
>         id: 1
>         guid: 15354816574459194272
>         path: '/dev/ada1s1'
>         phys_path: '/dev/ada1s1'
>         whole_disk: 1
>         metaslab_array: 125
>         metaslab_shift: 27
>         ashift: 9
>         asize: 16100884480
>         is_log: 1
>         DTL: 137
>         create_txg: 10478480
>     features_for_read:
> --------------------------------------------
> LABEL 3
> --------------------------------------------
>     version: 5000
>     name: 'storage'
>     state: 1
>     txg: 14792113
>     pool_guid: 11699153865862401654
>     hostid: 3089874380
>     hostname: 'ggulchin.homeunix.com'
>     top_guid: 15354816574459194272
>     guid: 15354816574459194272
>     is_log: 1
>     vdev_children: 3
>     vdev_tree:
>         type: 'disk'
>         id: 1
>         guid: 15354816574459194272
>         path: '/dev/ada1s1'
>         phys_path: '/dev/ada1s1'
>         whole_disk: 1
>         metaslab_array: 125
>         metaslab_shift: 27
>         ashift: 9
>         asize: 16100884480
>         is_log: 1
>         DTL: 137
>         create_txg: 10478480
>     features_for_read:
>
> does this mean the SSD drive is corrupted?
> is my pool lost forever?
>
> thanks!
>
> On Apr 21, 2014, at 2:24 PM, Hakisho Nukama wrote:
>
>> Hi Gena,
>>
>> there are several options to import a pool, which might work.
>> It looks like only one device is missing in raidz1, so the pool
>> could be importable if the cache device is also available.
>> Try to connect it back; a missing cache device can cause a
>> non-importable pool.
>>
>> Try reading the zpool man page and investigate the following flags:
>> zpool import -F -o readonly=on
>>
>> Best Regards,
>> Nukama
>>
>> On Mon, Apr 21, 2014 at 7:29 PM, Gena Guchin wrote:
>>> Hello FreeBSD users,
>>>
>>> my apologies for reposting, but I really need your help!
>>>
>>> I have this huge problem with my ZFS server. I accidentally formatted one of the drives in an exported ZFS pool, and now I can't import the pool. This pool is extremely important to me. The device that is missing is still attached to the system. Any help would be greatly appreciated.
>>>
>>> #uname -a
>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>>>
>>> #zpool import
>>>    pool: storage
>>>      id: 11699153865862401654
>>>   state: UNAVAIL
>>>  status: One or more devices are missing from the system.
>>>  action: The pool cannot be imported. Attach the missing
>>>          devices and try again.
>>>     see: http://illumos.org/msg/ZFS-8000-6X
>>>  config:
>>>
>>>         storage                 UNAVAIL  missing device
>>>           raidz1-0              DEGRADED
>>>             ada3                ONLINE
>>>             ada4                ONLINE
>>>             ada5                ONLINE
>>>             ada6                ONLINE
>>>             248348789931078390  UNAVAIL  cannot open
>>>         cache
>>>           ada1s2
>>>         logs
>>>           ada1s1                ONLINE
>>>
>>>         Additional devices are known to be part of this pool, though their
>>>         exact configuration cannot be determined.
>>>
>>> # zpool list
>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>> zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -
>>>
>>> # zpool upgrade
>>> This system supports ZFS pool feature flags.
>>>
>>> All pools are formatted using feature flags.
>>>
>>> Every feature flags pool has all supported features enabled.
>>>
>>> # zfs upgrade
>>> This system is currently running ZFS filesystem version 5.
>>>
>>> All filesystems are formatted with the current version.
>>>
>>> Thanks a lot!
>>>
>>> -- Gena
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 01:06:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7FB9315D for ; Tue, 22 Apr 2014 01:06:38 +0000 (UTC) Received: from mail-out.apple.com (crispin.apple.com [17.151.62.50]) (using TLSv1 with cipher DES-CBC3-SHA (168/168 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 619F01CCA for ; Tue, 22 Apr 2014 01:06:38 +0000 (UTC) MIME-version: 1.0 Content-type: text/plain; charset=windows-1252 Received: from mail-out.apple.com by local.mail-out.apple.com (Oracle Communications Messaging Server 7.0.5.30.0 64bit (built Oct 22 2013)) id <0N4E00B00O1WJZ00@local.mail-out.apple.com> for freebsd-fs@freebsd.org; Mon, 21 Apr 2014 17:06:32 -0700 (PDT) Received: from relay2.apple.com ([17.128.113.67]) by local.mail-out.apple.com (Oracle Communications Messaging Server 7.0.5.30.0 64bit (built Oct 22 2013)) with ESMTP id <0N4E00M9NOASMJ21@local.mail-out.apple.com> for freebsd-fs@freebsd.org; Mon, 21 Apr 2014 17:06:32 -0700 (PDT) Received: from ggulchin.apple.com (ggulchin.apple.com [17.199.68.248]) (using TLS with cipher AES128-SHA (128/128 bits)) (Client did not present a certificate) by relay2.apple.com (Apple SCV relay) with SMTP id 50.1C.05587.882B5535; Mon, 21 Apr 2014 17:06:32 -0700 (PDT) Subject: Fwd: ZFS unable to import pool From: Gena Guchin Date: Mon, 21 Apr 2014 17:06:32 -0700 Content-transfer-encoding: quoted-printable Message-id: <50B7A3BC-293C-4A9E-AD13-30582EA4561E@icloud.com> References: <79B67A2F-DE78-4272-BA1D-FD6E6D5F1D07@icloud.com> To: FreeBSD Filesystems X-Mailer: Apple Mail (2.1878) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Apr 2014 01:06:38 -0000

Begin forwarded message:

> From: Gena Guchin
> Subject: Re: ZFS unable to import pool
> Date: April 21, 2014 at 4:25:14 PM PDT
> To: Hakisho Nukama
>
> Hakisho,
>
> I did try it.
>
> # zpool import -F -o readonly=on storage
> cannot import 'storage': one or more devices is currently unavailable
>
> # gpart list
> Geom name: ada0
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada0p1
>    Mediasize: 524288 (512K)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r0w0e0
>    rawuuid: e621bb07-a4a4-11e3-98fc-001d7d090860
>    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>    label: gptboot0
>    length: 524288
>    offset: 20480
>    type: freebsd-boot
>    index: 1
>    end: 1063
>    start: 40
> 2. Name: ada0p2
>    Mediasize: 4294967296 (4.0G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e1
>    rawuuid: e6633c97-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>    label: swap0
>    length: 4294967296
>    offset: 544768
>    type: freebsd-swap
>    index: 2
>    end: 8389671
>    start: 1064
> 3. Name: ada0p3
>    Mediasize: 995909353472 (928G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e2
>    rawuuid: e6953f31-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>    label: zfs0
>    length: 995909353472
>    offset: 4295512064
>    type: freebsd-zfs
>    index: 3
>    end: 1953525127
>    start: 8389672
> Consumers:
> 1. Name: ada0
>    Mediasize: 1000204886016 (932G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r2w2e5
>
> Geom name: ada1
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: ada1s1
>    Mediasize: 16105775616 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 32256
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 16105775616
>    offset: 32256
>    type: freebsd
>    index: 1
>    end: 31456655
>    start: 63
> 2. Name: ada1s2
>    Mediasize: 15893692416 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 3220905984
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 15893692416
>    offset: 16105807872
>    type: freebsd
>    index: 2
>    end: 62499023
>    start: 31456656
> Consumers:
> 1. Name: ada1
>    Mediasize: 32000000000 (30G)
>    Sectorsize: 512
>    Mode: r0w0e0
>
> Geom name: diskid/DISK-CVEM852600N5032HGN
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 62499999
> first: 63
> entries: 4
> scheme: MBR
> Providers:
> 1. Name: diskid/DISK-CVEM852600N5032HGNs1
>    Mediasize: 16105775616 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 32256
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 16105775616
>    offset: 32256
>    type: freebsd
>    index: 1
>    end: 31456655
>    start: 63
> 2. Name: diskid/DISK-CVEM852600N5032HGNs2
>    Mediasize: 15893692416 (15G)
>    Sectorsize: 512
>    Stripesize: 0
>    Stripeoffset: 3220905984
>    Mode: r0w0e0
>    attrib: active
>    rawtype: 165
>    length: 15893692416
>    offset: 16105807872
>    type: freebsd
>    index: 2
>    end: 62499023
>    start: 31456656
> Consumers:
> 1. Name: diskid/DISK-CVEM852600N5032HGN
>    Mediasize: 32000000000 (30G)
>    Sectorsize: 512
>    Mode: r0w0e0
>
> Geom name: ada2
> modified: false
> state: OK
> fwheads: 16
> fwsectors: 63
> last: 1953525134
> first: 34
> entries: 128
> scheme: GPT
> Providers:
> 1. Name: ada2p1
>    Mediasize: 524288 (512K)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r0w0e0
>    rawuuid: e73e1154-a4a4-11e3-98fc-001d7d090860
>    rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
>    label: gptboot1
>    length: 524288
>    offset: 20480
>    type: freebsd-boot
>    index: 1
>    end: 1063
>    start: 40
> 2. Name: ada2p2
>    Mediasize: 4294967296 (4.0G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e1
>    rawuuid: e77bd5dd-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
>    label: swap1
>    length: 4294967296
>    offset: 544768
>    type: freebsd-swap
>    index: 2
>    end: 8389671
>    start: 1064
> 3. Name: ada2p3
>    Mediasize: 995909353472 (928G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r1w1e2
>    rawuuid: e7ad15ae-a4a4-11e3-98fc-001d7d090860
>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>    label: zfs1
>    length: 995909353472
>    offset: 4295512064
>    type: freebsd-zfs
>    index: 3
>    end: 1953525127
>    start: 8389672
> Consumers:
> 1. Name: ada2
>    Mediasize: 1000204886016 (932G)
>    Sectorsize: 512
>    Stripesize: 4096
>    Stripeoffset: 0
>    Mode: r2w2e5
>
> thanks for your help!
>
> On Apr 21, 2014, at 4:17 PM, Hakisho Nukama wrote:
>
>> Hi Gena,
>>
>> a missing cache device shouldn't be a problem.
>> There was a problem some years ago, where a pool was lost
>> with a missing cache device.
>> But that seems to be ancient history (version 19 or whatever changed that).
>> Otherwise I wouldn't use a cache device myself.
>>
>> You may try other ZFS implementations to import your pool:
>> ZFSonLinux, Illumos.
>> https://github.com/zfsonlinux/zfs/issues/1863
>> https://groups.google.com/forum/#!topic/zfs-fuse/TaOCLPQ8mp0
>> https://forums.freebsd.org/viewtopic.php?&t=18221
>>
>> Have you tried the -o readonly=on option for zpool import?
>> Can you show your gpart list output?
>>
>> Best Regards,
>> Nukama
>>
>> On Mon, Apr 21, 2014 at 10:18 PM, Gena Guchin wrote:
>>> Hakisho,
>>>
>>> this is weird: while I do not see ONLINE next to cache device ada1s2, it is the same device as logs ada1s1, just a different slice.
>>> I tried to see the difference between zfs labels on that device.
>>>
>>> [gena@ggulchin]-pts/0:57# zdb -l /dev/ada1s2
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>     version: 5000
>>>     state: 4
>>>     guid: 7108193965515577889
>>> [gena@ggulchin]-pts/0:58# zdb -l /dev/ada1s1
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>>     version: 5000
>>>     name: 'storage'
>>>     state: 1
>>>     txg: 14792113
>>>     pool_guid: 11699153865862401654
>>>     hostid: 3089874380
>>>     hostname: 'ggulchin.homeunix.com'
>>>     top_guid: 15354816574459194272
>>>     guid: 15354816574459194272
>>>     is_log: 1
>>>     vdev_children: 3
>>>     vdev_tree:
>>>         type: 'disk'
>>>         id: 1
>>>         guid: 15354816574459194272
>>>         path: '/dev/ada1s1'
>>>         phys_path: '/dev/ada1s1'
>>>         whole_disk: 1
>>>         metaslab_array: 125
>>>         metaslab_shift: 27
>>>         ashift: 9
>>>         asize: 16100884480
>>>         is_log: 1
>>>         DTL: 137
>>>         create_txg: 10478480
>>>     features_for_read:
>>>
>>> does this mean the SSD drive is corrupted?
>>> is my pool lost forever?
>>>
>>> thanks!
>>>
>>> On Apr 21, 2014, at 2:24 PM, Hakisho Nukama wrote:
>>>
>>>> Hi Gena,
>>>>
>>>> there are several options to import a pool, which might work.
>>>> It looks like only one device is missing in raidz1, so the pool
>>>> could be importable if the cache device is also available.
>>>> Try to connect it back; a missing cache device can cause a
>>>> non-importable pool.
>>>>
>>>> Try reading the zpool man page and investigate the following flags:
>>>> zpool import -F -o readonly=on
>>>>
>>>> Best Regards,
>>>> Nukama
>>>>
>>>> On Mon, Apr 21, 2014 at 7:29 PM, Gena Guchin wrote:
>>>>> Hello FreeBSD users,
>>>>>
>>>>> my apologies for reposting, but I really need your help!
>>>>>
>>>>> I have this huge problem with my ZFS server. I accidentally formatted one of the drives in an exported ZFS pool, and now I can't import the pool. This pool is extremely important to me. The device that is missing is still attached to the system. Any help would be greatly appreciated.
>>>>>
>>>>> #uname -a
>>>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>>>>>
>>>>> #zpool import
>>>>>    pool: storage
>>>>>      id: 11699153865862401654
>>>>>   state: UNAVAIL
>>>>>  status: One or more devices are missing from the system.
>>>>>  action: The pool cannot be imported. Attach the missing
>>>>>          devices and try again.
>>>>>     see: http://illumos.org/msg/ZFS-8000-6X
>>>>>  config:
>>>>>
>>>>>         storage                 UNAVAIL  missing device
>>>>>           raidz1-0              DEGRADED
>>>>>             ada3                ONLINE
>>>>>             ada4                ONLINE
>>>>>             ada5                ONLINE
>>>>>             ada6                ONLINE
>>>>>             248348789931078390  UNAVAIL  cannot open
>>>>>         cache
>>>>>           ada1s2
>>>>>         logs
>>>>>           ada1s1                ONLINE
>>>>>
>>>>>         Additional devices are known to be part of this pool, though their
>>>>>         exact configuration cannot be determined.
>>>>>
>>>>> # zpool list
>>>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>>>> zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -
>>>>>
>>>>> # zpool upgrade
>>>>> This system supports ZFS pool feature flags.
>>>>>
>>>>> All pools are formatted using feature flags.
>>>>>
>>>>> Every feature flags pool has all supported features enabled.
>>>>>
>>>>> # zfs upgrade
>>>>> This system is currently running ZFS filesystem version 5.
>>>>>
>>>>> All filesystems are formatted with the current version.
>>>>>
>>>>> Thanks a lot!
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 02:21:46 2014
Date: Mon, 21 Apr 2014 21:21:42 -0500
From: Bryan Drewery
To: fs@FreeBSD.org
Subject: NFS: mkdir: EBADRPC from recent changes
Message-ID: <5355D236.7070101@FreeBSD.org>

Client: head @ r264740
Server: 8-STABLE (Mon Jan 23 12:24:50 EST 2012)

The server is zoo.FreeBSD.org. Not sure why it's so old.

I was having constant EBADRPC errors with mkdir from the client, using
NFS root.

> mkdir("/blah2",0777) ERR#72 'RPC struct is bad'

TCPDUMP: http://dpaste.com/1790369/plain/

Reverting r264672, r264681, r264705, r264738 all fixed it. I had tried
reverting just r264672 but it was not enough.

I did not have r264739 in this test.
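[A hedged sketch of how a failure like this can be captured for a report, assuming a FreeBSD client and a hypothetical interface name em0; truss records the failing system call while tcpdump captures the NFS RPC traffic on the standard port:

# tcpdump -s 0 -i em0 -w /tmp/nfs-mkdir.pcap port 2049 &
# truss -o /tmp/mkdir.truss mkdir /blah2
]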
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 04:53:07 2014
Date: Tue, 22 Apr 2014 00:53:00 -0400 (EDT)
From: Rick Macklem
To: Bryan Drewery
Cc: fs@FreeBSD.org
Subject: Re: NFS: mkdir: EBADRPC from recent changes
Message-ID: <761167240.14228617.1398142380192.JavaMail.root@uoguelph.ca>
In-Reply-To: <5355D236.7070101@FreeBSD.org>

Bryan Drewery wrote:
> Client: head @ r264740
> Server: 8-STABLE (Mon Jan 23 12:24:50 EST 2012)
>
> The server is zoo.FreeBSD.org. Not sure why it's so old.
>
Well, since you were using NFSv2, a 1985 server would have resulted in
the same failure;-)

> I was having constant EBADRPC errors with mkdir from the client,
> using NFS root.
>
Oops, sorry. I keep forgetting to test NFSv2. Fixed by r264749.

> > mkdir("/blah2",0777) ERR#72 'RPC struct is bad'
>
> TCPDUMP: http://dpaste.com/1790369/plain/
>
> Reverting r264672, r264681, r264705, r264738 all fixed it. I had
> tried reverting just r264672 but it was not enough.
>
Yep, it was r264705 that broke mkdir for NFSv2. Thanks for reporting
it, rick

> I did not have r264739 in this test.
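[Since the regression was specific to the NFSv2 path, a hedged way to exercise it from a test client, assuming a hypothetical export at server:/export, is to force protocol version 2 at mount time (the nfsv2 option is documented in mount_nfs(8)):

# mount -t nfs -o nfsv2 server:/export /mnt
# mkdir /mnt/blah2
]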
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 09:49:28 2014
Date: Tue, 22 Apr 2014 11:49:25 +0200
From: "Ronald Klop"
To: freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool

On Mon, 21 Apr 2014 21:29:27 +0200, Gena Guchin wrote:

> Hello FreeBSD users,
>
> my apologies for reposting, but I really need your help!
>
> I have this huge problem with my ZFS server. I have accidentally
> formatted one of the drives in an exported ZFS pool, and now I can't
> import the pool back. This is an extremely important pool for me. The
> device that is missing is still attached to the system. Any help
> would be greatly appreciated.
>
> [full report and zpool output quoted above]

Does FreeBSD see the disk? Is it in /dev/ada2 (or another number)?
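[A hedged sketch of how that question can be answered from the shell; camcontrol, geom, and gpart are the stock FreeBSD tools for listing attached disks and their partitioning:

# camcontrol devlist
# geom disk list
# gpart show ada7
]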
If FreeBSD does not know anything about the disk, ZFS can't either. A
reboot or some fiddling (partitioning?) with GEOM might make the disk
reappear.

Ronald.

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 16:36:50 2014
Date: Tue, 22 Apr 2014 09:36:48 -0700
From: Gena Guchin
To: Ronald Klop
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool
Message-id: <6DACDF6E-E1ED-49C0-975C-A91F68EA8840@icloud.com>

Ronald,

the system does see the disk, ada7 in this case. Nothing has been
disconnected from the system.

what steps do you suggest I take with GEOM?

thanks!

On Apr 22, 2014, at 2:49 AM, Ronald Klop wrote:

> On Mon, 21 Apr 2014 21:29:27 +0200, Gena Guchin wrote:
>
>> [original report quoted above]
>
> Does FreeBSD see the disk? Is it in /dev/ada2 (or another number)?
> If FreeBSD does not know anything about the disk, ZFS can't either.
> A reboot or some fiddling (partitioning?) with GEOM might make the
> disk reappear.
>
> Ronald.

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 16:39:35 2014
Date: Tue, 22 Apr 2014 11:39:20 -0500
From: Linda Kateley
To: freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool
Message-ID: <53569B38.7010406@kateley.com>

Have you tried to offline the disk? Then online the disk?

lk

On 4/22/14, 11:36 AM, Gena Guchin wrote:
> Ronald,
>
> system does see the disk, ada7, in this case. Nothing has been
> disconnected from the system.
>
> what steps do you suggest I take with GEOM?
>
> [rest of thread quoted above]
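[What Linda suggests would look like the following hedged sketch; note that zpool offline and zpool online operate on an imported pool, which is the snag Gena points out in the next message, and that the numeric guid from the zpool import output may be needed in place of the device name:

# zpool offline storage ada7
# zpool online storage ada7
]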
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 16:54:31 2014
Date: Tue, 22 Apr 2014 09:54:16 -0700
From: Gena Guchin
To: Linda Kateley
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool
Message-id: <629424D4-1C4A-40C0-8F5F-8920B3C7A997@icloud.com>

Linda,

the pool is exported. I can't import it back in.
I can't offline or online while the pool is exported.

:(

On Apr 22, 2014, at 9:39 AM, Linda Kateley wrote:

> Have you tried to offline the disk? Then online the disk?
>
> lk
>
> [rest of thread quoted above]

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 17:34:02 2014
Date: Tue, 22 Apr 2014 12:33:47 -0500
From: Linda Kateley
To: Gena Guchin
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool
Message-ID: <5356A7FB.9050706@kateley.com>

The message also says to try the import with a -d to search for disks.

lk

On 4/22/14, 11:54 AM, Gena Guchin wrote:
> Linda,
>
> the pool is exported. I can't import it back in.
> I can't offline or online while the pool is exported.
>
> [rest of thread quoted above]
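[A hedged sketch of the -d form Linda mentions; per zpool(8), -d names a directory to search for devices (it can be given more than once), and the numeric pool id can stand in for the name:

# zpool import -d /dev
# zpool import -d /dev 11699153865862401654
]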
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 22 17:45:28 2014
Date: Tue, 22 Apr 2014 12:45:12 -0500
From: Linda Kateley
To: Gena Guchin
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool
Message-ID: <5356AAA8.7020403@kateley.com>

also, although it is not the preferred method, i have deleted the
zpool.cache file in the past, rebooted, and then retried the import
(but on solaris). That should rebuild the information the system knows
about the pool.

I think it is saying the top-level vdev is missing, not the disk
itself, according to the message id.

On 4/22/14, 11:54 AM, Gena Guchin wrote:
> Linda,
>
> the pool is exported. I can't import it back in.
> I can't offline or online while the pool is exported.
>
> [rest of thread quoted above]
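[On FreeBSD the cache file Linda mentions normally lives at /boot/zfs/zpool.cache (worth verifying on your release before touching it); a more cautious variant of the same idea is to move it aside rather than delete it:

# mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak
# shutdown -r now
# zpool import
]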
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 06:05:42 2014
Date: Wed, 23 Apr 2014 14:05:40 +0800
From: Marcelo Araujo
Reply-To: araujo@FreeBSD.org
To: Rick Macklem
Cc: "freebsd-fs@freebsd.org"
Subject: Re: NFSv4: prob err=10036

2014-04-16 5:01 GMT+08:00 Rick Macklem:
>
> Well, I looked at the packet trace and it is weird.
>
> One field (the NFSv4 operation #) is incorrect in the packet.
> It should have been 33 (0x21), which is PUTROOTFH, and instead
> it is 39 (0x27), which is RELEASELOCKOWNER.
> All the arguments after the operation # are correct for the
> RPC, if that operation # was 33 (PUTROOTFH).
>
> Since the call looks like this (around line #4303 in
> sys/fs/nfsclient/nfs_clrpcops.c):
>
> nfscl_reqstart(nd, NFSPROC_PUTROOTFH, nmp, NULL, 0, &opcntp, NULL);
>
> I can't imagine how NFSPROC_PUTROOTFH became NFSPROC_RELEASELCKOWN?
> (Btw, there is a mapping from NFSPROC_xxx to NFSV4OP_xxx that occurs,
> so these arguments are 33 and 34 respectively and not 33 and 39.)
>
> So, somehow the argument gets incremented by one when it is on the
> stack for the call. (It would be 34 in nfscl_reqstart(), since the
> tag is "Rellckown" and not "Dirpath" in the packet header. This tag
> is for debugging only and doesn't affect the RPC's semantics. For
> once, it was useful;-) So, this isn't some data error later, such as
> "on the wire".
>
> All I can suggest is that something is stomping on this field on
> the stack or there is a memory problem where this stack argument
> sits?
>
> Aren't computers fun? rick

Hello Rick,

First of all, thank you so much; once again, with your help I could
solve this weird situation. The problem turned out to be a VAAI
implementation on the server side I was testing against: that VAAI
implementation shifts the NFSPROC operation numbers ahead by one,
which left my client totally confused. I made a fix in my NFSv4
client and now everything works properly.

I don't think it makes much sense to send this patch back, as the
VAAI implementation in my case is not very portable.

Once again, thank you so much. You have done an amazing job on NFS.

Best Regards,
--
Marcelo Araujo
araujo@FreeBSD.org

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 06:46:44 2014
Date: Wed, 23 Apr 2014 08:42:03 +0200
From: Hugo Lombard
To: Gena Guchin
Cc: FreeBSD Filesystems
Subject: Re: ZFS unable to import pool
Message-ID: <20140423064203.GD2830@sludge.elizium.za.net>

On Mon, Apr 21, 2014 at 12:29:27PM -0700, Gena Guchin wrote:
>
> I have this huge problem with my ZFS server. I have accidentally
> formatted one of the drives in an exported ZFS pool.

Hello

Apologies if I missed it, but can you please explain what happened
during the time the disk got 'formatted'?

Regards

--
Hugo Lombard
  .___.
  (o,o)
  /)  )
---"-"---

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 07:46:51 2014
Date: Wed, 23 Apr 2014 00:46:43 -0700
From: Gennadiy Gulchin
To: Hugo Lombard
Cc: FreeBSD Filesystems
Subject: Re: ZFS unable to import pool

Yes :( it was formatted :( I wanted to fix the 4k zfs issue.

--Gena

> On Apr 22, 2014, at 11:42 PM, Hugo Lombard wrote:
>
> Apologies if I missed it, but can you please explain what happened
> during the time the disk got 'formatted'?
>
> --
> Hugo Lombard
> (o,o) > /) ) > ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 07:48:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A76E9F68 for ; Wed, 23 Apr 2014 07:48:47 +0000 (UTC) Received: from st11p02mm-asmtp001.mac.com (st11p02mm-asmtp001.mac.com [17.172.220.236]) by mx1.freebsd.org (Postfix) with ESMTP id 776EF1CE5 for ; Wed, 23 Apr 2014 07:48:47 +0000 (UTC) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from [192.168.0.138] (c-69-181-42-159.hsd1.ca.comcast.net [69.181.42.159]) by st11p02mm-asmtp001.mac.com (Oracle Communications Messaging Server 7u4-27.08(7.0.4.27.7) 64bit (built Aug 22 2013)) with ESMTPSA id <0N4H00LQB4CFCG70@st11p02mm-asmtp001.mac.com> for freebsd-fs@freebsd.org; Wed, 23 Apr 2014 07:48:18 +0000 (GMT) Subject: Re: ZFS unable to import pool From: Gennadiy Gulchin X-Mailer: iPhone Mail (11D201) In-reply-to: <20140423064203.GD2830@sludge.elizium.za.net> Date: Wed, 23 Apr 2014 00:48:16 -0700 Message-id: References: <20140423064203.GD2830@sludge.elizium.za.net> To: Hugo Lombard X-MANTSH: 1TEIXWV4bG1oaGkdHB0lGUkdDRl5PWBoaHBEKTEMXGx0EGx0YBBIZBBsTEBseGh8 aEQpYTRdLEQptfhcaEQpMWRcbGhsbEQpZSRcRClleF2hueREKQ04XSxsYGmJCH2hfHx5yGXhzB xlkGh0eEmJrEQpYXBcZBBoEHQdNSx0SSEkcTAUbHQQbHRgEEhkEGxMQGx4aHxsRCl5ZF2FAeH5 JEQpMRhdsa2sRCkNaFxwTBBsSGwQeGAQbHxMRCkRYFxgRCkRJFxsRCkJFF2ZFeXBAGEwFRHIbE QpCThdrRRpSUB5DXFlcaBEKQkwXZkh9WWJdUntiWR8RCkJsF2wZAUFsG1Jmfx5nEQpCQBdlbmt lf3pnUGcdXREKcGcXbH8TTEZrXBJOGBgRCnBoF2BgSGlCaFkcTmZkEQpwaBdjaBJgAR5zW29dH xEKcGwXY1lzRFxrXhsdXm0RCnBMF2dJZnhnZ0V/G0YYEQ== X-CLX-Spam: false X-CLX-Score: 1011 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.11.96,1.0.14,0.0.0000 definitions=2014-04-23_03:2014-04-22,2014-04-23,1970-01-01 signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=7.0.1-1402240000 definitions=main-1404230135 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 07:48:47 -0000 Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1... --Gena > On Apr 22, 2014, at 11:42 PM, Hugo Lombard wrote: > >> On Mon, Apr 21, 2014 at 12:29:27PM -0700, Gena Guchin wrote: >> >> I have this huge problem with my ZFS server. I have accidentally >> formatted one of the drives in exported ZFS pool. > > Hello > > Apologies if I missed it, but can you please explain what happened > during the time the disk got 'formatted'? > > Regards > > -- > Hugo Lombard > .___. 
> (o,o) > /) ) > ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 07:57:42 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E0AE42F9 for ; Wed, 23 Apr 2014 07:57:42 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id A88C61DD7 for ; Wed, 23 Apr 2014 07:57:42 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id D8AB4803F; Wed, 23 Apr 2014 09:57:40 +0200 (SAST) Date: Wed, 23 Apr 2014 10:00:56 +0200 From: Hugo Lombard To: Gennadiy Gulchin Subject: Re: ZFS unable to import pool Message-ID: <20140423080056.GE2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 07:57:42 -0000 On Wed, Apr 23, 2014 at 12:48:16AM -0700, Gennadiy Gulchin wrote: > Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1... > OK, so you only overwrote the first 512 bytes of the disk, am I understanding correctly? Can you see if any output results from: zdb -l /dev/ada7 -- Hugo Lombard .___. (o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 08:17:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B90EF721 for ; Wed, 23 Apr 2014 08:17:43 +0000 (UTC) Received: from st11p02mm-asmtp002.mac.com (st11p02mm-asmtp002.mac.com [17.172.220.237]) by mx1.freebsd.org (Postfix) with ESMTP id 87C8B113C for ; Wed, 23 Apr 2014 08:17:43 +0000 (UTC) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from [17.153.37.167] (unknown [17.153.37.167]) by st11p02mm-asmtp002.mac.com (Oracle Communications Messaging Server 7u4-27.08(7.0.4.27.7) 64bit (built Aug 22 2013)) with ESMTPSA id <0N4H009ZH5OS2MA0@st11p02mm-asmtp002.mac.com> for freebsd-fs@freebsd.org; Wed, 23 Apr 2014 08:17:18 +0000 (GMT) Subject: Re: ZFS unable to import pool From: Gena Guchin In-reply-to: <20140423080056.GE2830@sludge.elizium.za.net> Date: Wed, 23 Apr 2014 01:17:16 -0700 Message-id: References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> To: Hugo Lombard X-Mailer: Apple Mail (2.1878) X-MANTSH: 1TEIXWV4bG1oaGkdHB0lGUkdDRl5PWBoaHBEKTEMXGx0EGx0YBBIZBBsTEBseGh8 aEQpYTRdLEQptfhcaEQpMWRcbGhsbEQpZSRcRClleF2hjeREKQ04XSxsYGmJCH2hfHx5yGXhzB xlkGhIbHWNoEQpYXBcZBBoEHQdNSx0SSEkcTAUbHQQbHRgEEhkEGxMQGx4aHxsRCl5ZF2FAeF9 aEQpMRhdia2sRCkNaFxsdBBsfGQQZHQQbHB0RCkRYFxkRCkRJFxsRCkJFF2BIWFB7c0NrUBteE QpCThdrRRpSUB5DXFlcaBEKQkwXZkh9WWJdUntiWR8RCkJsF29nQVheY19CZhhEEQpCQBdlbmt lf3pnUGcdXREKcGcXbH8TTEZrXBJOGBgRCnBoF2wST2hBe0Z5UkIZEQpwaBdsBUtZcmFLfnNee xEKcGgXa0hJfkVef2YcelgRCnBoF2ZHe0VeWR9Zekd9EQpwaBdiSUMZTntnbh1yZhEKcGwXZx9 FXH55HhtCe18RCnBMF2ddZx9cfUEZS0N5EQ== X-CLX-Spam: false 
X-CLX-Score: 1011 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.11.96,1.0.14,0.0.0000 definitions=2014-04-23_03:2014-04-22,2014-04-23,1970-01-01 signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=7.0.1-1402240000 definitions=main-1404230141 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 08:17:43 -0000 Hugo, I get Empty labels, 0-3 # zdb -l /dev/ada7 -------------------------------------------- LABEL 0 -------------------------------------------- failed to unpack label 0 -------------------------------------------- LABEL 1 -------------------------------------------- failed to unpack label 1 -------------------------------------------- LABEL 2 -------------------------------------------- failed to unpack label 2 -------------------------------------------- LABEL 3 -------------------------------------------- failed to unpack label 3 On Apr 23, 2014, at 1:00 AM, Hugo Lombard wrote: > On Wed, Apr 23, 2014 at 12:48:16AM -0700, Gennadiy Gulchin wrote: >> Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1... >> > > OK, so you only overwrote the first 512 bytes of the disk, am I > understanding correctly? > > Can you see if any output results from: > > zdb -l /dev/ada7 > > -- > Hugo Lombard > .___. > (o,o) > /) ) > ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 08:23:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3C98F7F4 for ; Wed, 23 Apr 2014 08:23:53 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 0398911DC for ; Wed, 23 Apr 2014 08:23:52 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id D2CB4803F; Wed, 23 Apr 2014 10:23:50 +0200 (SAST) Date: Wed, 23 Apr 2014 10:27:06 +0200 From: Hugo Lombard To: Vusa Moyo Subject: Re: ZFS unable to import pool Message-ID: <20140423082706.GF2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <06296798-A70F-438B-AF1B-82E4D6B74413@tuxsystems.co.za> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <06296798-A70F-438B-AF1B-82E4D6B74413@tuxsystems.co.za> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 08:23:53 -0000 On Wed, Apr 23, 2014 at 10:19:30AM +0200, Vusa Moyo wrote: > Sounds to me like you will need to remove the disk ada7 from the > system so ZFS marks it as failed. > > Depending on the raidz mode you have set for your zpool and the number > of disks remaining being ok, it should start. > It's a raidz1... -- Hugo Lombard .___. 
(o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 08:27:21 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C9A5B97A for ; Wed, 23 Apr 2014 08:27:21 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 912DD1209 for ; Wed, 23 Apr 2014 08:27:21 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id 1A632803F; Wed, 23 Apr 2014 10:27:19 +0200 (SAST) Date: Wed, 23 Apr 2014 10:30:35 +0200 From: Hugo Lombard To: Gena Guchin Subject: Re: ZFS unable to import pool Message-ID: <20140423083035.GG2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 08:27:21 -0000 On Wed, Apr 23, 2014 at 01:17:16AM -0700, Gena Guchin wrote: > Hugo, > > I get Empty labels, 0-3 > Oh dear... I'm trying to set up something similar to your situation. Will see if I can come up with something else. -- Hugo Lombard .___. (o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 08:28:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E455BAD8 for ; Wed, 23 Apr 2014 08:28:48 +0000 (UTC) Received: from MOY07-NIX1.wadns.net (moy07-nix1.wadns.net [41.185.26.137]) (using TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7A8911223 for ; Wed, 23 Apr 2014 08:28:48 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by MOY07-NIX1.wadns.net (Postfix) with ESMTP id AAF441FEF0; Wed, 23 Apr 2014 10:19:41 +0200 (SAST) X-Virus-Scanned: Debian amavisd-new at doggle.co.za Received: from MOY07-NIX1.wadns.net ([127.0.0.1]) by localhost (MOY07-NIX1.wadns.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JxlfNvdQVN7V; Wed, 23 Apr 2014 10:19:36 +0200 (SAST) Received: from [192.168.88.201] (105-208-160-142.access.mtnbusiness.co.za [105.208.160.142]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by MOY07-NIX1.wadns.net (Postfix) with ESMTPSA id 1FD361FD20; Wed, 23 Apr 2014 10:19:35 +0200 (SAST) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: ZFS unable to import pool From: Vusa Moyo In-Reply-To: Date: Wed, 23 Apr 2014 10:19:30 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <06296798-A70F-438B-AF1B-82E4D6B74413@tuxsystems.co.za> References: <20140423064203.GD2830@sludge.elizium.za.net> To: Gennadiy Gulchin X-Mailer: Apple Mail (2.1874) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 08:28:48 -0000 Sounds to me like you will need to remove the disk ada7 from the system so ZFS marks it as failed. Depending on the raidz mode you have set for your zpool and the number of disks remaining being ok, it should start. You should then re-introduce ada7 and let the pool resilver. Ciao. Vusa On 23 Apr 2014, at 9:48 AM, Gennadiy Gulchin wrote: > Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1... > > --Gena > >> On Apr 22, 2014, at 11:42 PM, Hugo Lombard wrote: >> >>> On Mon, Apr 21, 2014 at 12:29:27PM -0700, Gena Guchin wrote: >>> >>> I have this huge problem with my ZFS server. I have accidentally >>> formatted one of the drives in exported ZFS pool. >> >> Hello >> >> Apologies if I missed it, but can you please explain what happened >> during the time the disk got 'formatted'? >> >> Regards >> >> -- >> Hugo Lombard >> .___. >> (o,o) >> /) ) >> ---"-"--- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 09:15:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5DED4652 for ; Wed, 23 Apr 2014 09:15:39 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 260FC16D7 for ; Wed, 23 Apr 2014 09:15:39 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id 62C86803F; Wed, 23 Apr 2014 11:15:37 +0200 (SAST) Date: Wed, 23 Apr 2014 11:18:52 +0200 From: Hugo Lombard To: Gennadiy Gulchin Subject: Re: ZFS unable to import pool Message-ID: <20140423091852.GH2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20140423080056.GE2830@sludge.elizium.za.net> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 09:15:39 -0000 On Wed, Apr 23, 2014 at 10:00:56AM +0200, Hugo Lombard wrote: > On Wed, Apr 23, 2014 at 12:48:16AM -0700, Gennadiy Gulchin wrote: > > Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1... > > > > OK, so you only overwrote the first 512 bytes of the disk, am I > understanding correctly? > Are you sure that the above command was the only command that altered the data on ada7 outside of ZFS? I tried to recreate your situation using file-backed md devices, and even when I deleted the first 256k of the one device (equivalent to an entire label), zpool import could still see the device (and be willing to import it) and zdb would show the first label as 'failed to unpack' but would happily read the second (begin+256k), third (end-512k), and fourth (end-256k) labels. Also, I might be on an entirely wrong track here... -- Hugo Lombard .___.
(o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 09:16:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C753C6D2 for ; Wed, 23 Apr 2014 09:16:47 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 6F90916ED for ; Wed, 23 Apr 2014 09:16:47 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id 89979803F; Wed, 23 Apr 2014 11:16:45 +0200 (SAST) Date: Wed, 23 Apr 2014 11:20:01 +0200 From: Hugo Lombard To: Vusa Moyo Subject: Re: ZFS unable to import pool Message-ID: <20140423092000.GI2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <06296798-A70F-438B-AF1B-82E4D6B74413@tuxsystems.co.za> <20140423082706.GF2830@sludge.elizium.za.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20140423082706.GF2830@sludge.elizium.za.net> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 09:16:47 -0000 On Wed, Apr 23, 2014 at 10:27:06AM +0200, Hugo Lombard wrote: > On Wed, Apr 23, 2014 at 10:19:30AM +0200, Vusa Moyo wrote: > > Sounds to me like you will need to remove the disk ada7 from the > > system so ZFS marks it as failed. > > > > Depending on the raidz mode you have set for your zpool and the number > > of disks remaining being ok, it should start. > > > > It's a raidz1... > Which of course means 'RAID5' and not 'RAID1'... Sorry, my bad. I think I've been barking up the wrong tree... Changing course. -- Hugo Lombard .___. 
(o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 09:26:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E8487C94 for ; Wed, 23 Apr 2014 09:26:02 +0000 (UTC) Received: from MOY07-NIX1.wadns.net (moy07-nix1.wadns.net [41.185.26.137]) (using TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 83B6917E0 for ; Wed, 23 Apr 2014 09:26:02 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by MOY07-NIX1.wadns.net (Postfix) with ESMTP id 2E83D1FD4D; Wed, 23 Apr 2014 11:25:58 +0200 (SAST) X-Virus-Scanned: Debian amavisd-new at doggle.co.za Received: from MOY07-NIX1.wadns.net ([127.0.0.1]) by localhost (MOY07-NIX1.wadns.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id rqxre4AGBFYc; Wed, 23 Apr 2014 11:25:53 +0200 (SAST) Received: from [192.168.88.201] (105-208-160-142.access.mtnbusiness.co.za [105.208.160.142]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by MOY07-NIX1.wadns.net (Postfix) with ESMTPSA id 2101F1FD20; Wed, 23 Apr 2014 11:25:52 +0200 (SAST) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: ZFS unable to import pool From: Vusa Moyo In-Reply-To: <20140423092000.GI2830@sludge.elizium.za.net> Date: Wed, 23 Apr 2014 11:25:48 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <6CBB396B-93B6-4448-93D3-1A7074EA430E@tuxsystems.co.za> References: <20140423064203.GD2830@sludge.elizium.za.net> <06296798-A70F-438B-AF1B-82E4D6B74413@tuxsystems.co.za> <20140423082706.GF2830@sludge.elizium.za.net> <20140423092000.GI2830@sludge.elizium.za.net> To: Hugo Lombard X-Mailer: Apple Mail (2.1874) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 09:26:03 -0000 Is it possible you failed 2 drives instead of 1? Raidz1 cannot handle 2 failed drives. Maybe you were busy resilvering a previous failure when you dd'd ada7? Which is also catastrophic. On 23 Apr 2014, at 11:20 AM, Hugo Lombard wrote: > On Wed, Apr 23, 2014 at 10:27:06AM +0200, Hugo Lombard wrote: >> On Wed, Apr 23, 2014 at 10:19:30AM +0200, Vusa Moyo wrote: >>> Sounds to me like you will need to remove the disk ada7 from the >>> system so ZFS marks it as failed. >>> >>> Depending on the raidz mode you have set for your zpool and the number >>> of disks remaining being ok, it should start. >>> >> >> It's a raidz1... >> > > Which of course means 'RAID5' and not 'RAID1'... > > Sorry, my bad. > > I think I've been barking up the wrong tree... Changing course. > > -- > Hugo Lombard > .___.
> (o,o) > /) ) > ---"-"--- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 09:58:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 037DC516 for ; Wed, 23 Apr 2014 09:58:13 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 8C7521AD2 for ; Wed, 23 Apr 2014 09:58:12 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id DB84D803F; Wed, 23 Apr 2014 11:58:10 +0200 (SAST) Date: Wed, 23 Apr 2014 12:01:26 +0200 From: Hugo Lombard To: Gennadiy Gulchin Subject: Re: ZFS unable to import pool Message-ID: <20140423100126.GJ2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20140423091852.GH2830@sludge.elizium.za.net> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 09:58:13 -0000 Hello In your original 'zpool import' output, it shows the following: Additional devices are known to be part of this pool, though their exact configuration cannot be determined. I'm thinking your problem might be related to devices that are supposed to be part of the pool but aren't shown in the import. For instance, here's my attempt at recreating your scenario: # zpool import pool: t id: 15230454775812525624 state: DEGRADED status: One or more devices are missing from the system. action: The pool can be imported despite missing or damaged devices. The fault tolerance of the pool may be compromised if imported. see: http://illumos.org/msg/ZFS-8000-2Q config: t DEGRADED raidz1-0 DEGRADED md3 ONLINE md4 ONLINE md5 ONLINE md6 ONLINE 3421664295019948379 UNAVAIL cannot open cache md1s2 logs md1s1 ONLINE # As you can see, the pool status is 'DEGRADED' instead of 'UNAVAIL', and I don't have the 'Additional devices...' message. The pool imports OK: # zpool import t # zpool status t pool: t state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://illumos.org/msg/ZFS-8000-2Q scan: none requested config: NAME STATE READ WRITE CKSUM t DEGRADED 0 0 0 raidz1-0 DEGRADED 0 0 0 md3 ONLINE 0 0 0 md4 ONLINE 0 0 0 md5 ONLINE 0 0 0 md6 ONLINE 0 0 0 3421664295019948379 UNAVAIL 0 0 0 was /dev/md7 logs md1s1 ONLINE 0 0 0 cache md1s2 ONLINE 0 0 0 errors: No known data errors # As a further test, let's see what happens when the cache disk disappears: # zpool export t # gpart delete -i 2 md1 md1s2 deleted # zpool import pool: t id: 15230454775812525624 state: DEGRADED status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices. The fault tolerance of the pool may be compromised if imported. see: http://illumos.org/msg/ZFS-8000-2Q config: t DEGRADED raidz1-0 DEGRADED md3 ONLINE md4 ONLINE md5 ONLINE md6 ONLINE 3421664295019948379 UNAVAIL cannot open cache 7736388725784014558 logs md1s1 ONLINE # zpool import t # zpool status t pool: t state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://illumos.org/msg/ZFS-8000-2Q scan: none requested config: NAME STATE READ WRITE CKSUM t DEGRADED 0 0 0 raidz1-0 DEGRADED 0 0 0 md3 ONLINE 0 0 0 md4 ONLINE 0 0 0 md5 ONLINE 0 0 0 md6 ONLINE 0 0 0 3421664295019948379 UNAVAIL 0 0 0 was /dev/md7 logs md1s1 ONLINE 0 0 0 cache 7736388725784014558 UNAVAIL 0 0 0 was /dev/md1s2 errors: No known data errors # So even with a missing raidz component and a missing cache device, the pool still imports. I think some crucial piece of information is missing to complete the picture. -- Hugo Lombard .___. (o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 10:18:42 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 45006D19 for ; Wed, 23 Apr 2014 10:18:42 +0000 (UTC) Received: from mail-ee0-x231.google.com (mail-ee0-x231.google.com [IPv6:2a00:1450:4013:c00::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CAC67105B for ; Wed, 23 Apr 2014 10:18:41 +0000 (UTC) Received: by mail-ee0-f49.google.com with SMTP id c41so593287eek.8 for ; Wed, 23 Apr 2014 03:18:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=azQUHNfkcq5CO2zn/+MNNuvFXXRlTV1x2NMY6TfrNG4=; b=g0tGb7Ku2ZlyvMYpHBhgPnmP/klGhdgyGYexaEtgmKYRpmdLEOyttjWsrJF767VN0q rW3F/3eAdeSzCcVv0uuVBM2b5NTas9QGu8VLeQRBSyvIIsiwvJv7gH/gyAqsY6w3VS8H mxCno2SYsKZQJwqf7l1Zn3q6HADZEStnybS5ci0jeE+BLOqLhpP3NVXdZhCO1fWQkAPi lR+jZfEtq08Yg8vfH3hbYTKhmkPzezb+PEUUpEX3v+OK7phUm1RxJzM6Ci/ach6/wx9k QxIf2NLn7WZuwYTMrUNx4YXBux0jgPN/wNA1MssfiSD3EYQOYmhwccgegtp6NYtiIS5v a0aA== X-Received: by 10.14.246.196 with SMTP id q44mr62483510eer.45.1398248318786; Wed, 23 Apr 2014 03:18:38 -0700 (PDT) Received: from [192.168.1.117] (schavemaker.nl. 
[213.84.84.186]) by mx.google.com with ESMTPSA id t50sm4922533eev.28.2014.04.23.03.18.37 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 23 Apr 2014 03:18:38 -0700 (PDT) Message-ID: <5357937D.4080302@gmail.com> Date: Wed, 23 Apr 2014 12:18:37 +0200 From: Johan Hendriks User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Hugo Lombard Subject: Re: ZFS unable to import pool References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> In-Reply-To: <20140423100126.GJ2830@sludge.elizium.za.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 10:18:42 -0000 op 23-04-14 12:01, Hugo Lombard schreef: > Hello > > In your original 'zpool import' output, it shows the following: > > Additional devices are known to be part of this pool, though their > exact configuration cannot be determined. > > I'm thinking your problem might be related to devices that's supposed to > be part of the pool but that's not shown in the import. > > For instance, here's my attempt at recreating your scenario: > > # zpool import > pool: t > id: 15230454775812525624 > state: DEGRADED > status: One or more devices are missing from the system. > action: The pool can be imported despite missing or damaged devices. The > fault tolerance of the pool may be compromised if imported. > see: http://illumos.org/msg/ZFS-8000-2Q > config: > > t DEGRADED > raidz1-0 DEGRADED > md3 ONLINE > md4 ONLINE > md5 ONLINE > md6 ONLINE > 3421664295019948379 UNAVAIL cannot open > cache > md1s2 > logs > md1s1 ONLINE > # > > As you can see, the pool stattus is 'DEGRADED' instead of 'UNAVAIL', and > I don't have the 'Additional devices...' message. > > The pool imports OK: > > # zpool import t > # zpool status t > pool: t > state: DEGRADED > status: One or more devices could not be opened. Sufficient replicas exist for > the pool to continue functioning in a degraded state. > action: Attach the missing device and online it using 'zpool online'. > see: http://illumos.org/msg/ZFS-8000-2Q > scan: none requested > config: > > NAME STATE READ WRITE CKSUM > t DEGRADED 0 0 0 > raidz1-0 DEGRADED 0 0 0 > md3 ONLINE 0 0 0 > md4 ONLINE 0 0 0 > md5 ONLINE 0 0 0 > md6 ONLINE 0 0 0 > 3421664295019948379 UNAVAIL 0 0 0 was /dev/md7 > logs > md1s1 ONLINE 0 0 0 > cache > md1s2 ONLINE 0 0 0 > > errors: No known data errors > # > > As a further test, let's see what happens when the cache disk > disappears: > > # zpool export t > # gpart delete -i 2 md1 > md1s2 deleted > # zpool import > pool: t > id: 15230454775812525624 > state: DEGRADED > status: One or more devices are missing from the system. > action: The pool can be imported despite missing or damaged devices. The > fault tolerance of the pool may be compromised if imported. > see: http://illumos.org/msg/ZFS-8000-2Q > config: > > t DEGRADED > raidz1-0 DEGRADED > md3 ONLINE > md4 ONLINE > md5 ONLINE > md6 ONLINE > 3421664295019948379 UNAVAIL cannot open > cache > 7736388725784014558 > logs > md1s1 ONLINE > # zpool import t > # zpool status t > pool: t > state: DEGRADED > status: One or more devices could not be opened. 
Sufficient replicas exist for > the pool to continue functioning in a degraded state. > action: Attach the missing device and online it using 'zpool online'. > see: http://illumos.org/msg/ZFS-8000-2Q > scan: none requested > config: > > NAME STATE READ WRITE CKSUM > t DEGRADED 0 0 0 > raidz1-0 DEGRADED 0 0 0 > md3 ONLINE 0 0 0 > md4 ONLINE 0 0 0 > md5 ONLINE 0 0 0 > md6 ONLINE 0 0 0 > 3421664295019948379 UNAVAIL 0 0 0 was /dev/md7 > logs > md1s1 ONLINE 0 0 0 > cache > 7736388725784014558 UNAVAIL 0 0 0 was /dev/md1s2 > > errors: No known data errors > # > > So even with a missing raidz component and a missing cache device, the > pool still imports. > > I think some crucial piece of information is missing to complete the > picture. > Did you in the past add an extra disk to the pool? This could explain the whole issue as the pool is missing a whole vdev. regards Johan From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 11:57:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7E9778C6 for ; Wed, 23 Apr 2014 11:57:29 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 450BC1B93 for ; Wed, 23 Apr 2014 11:57:29 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id 7B631803F; Wed, 23 Apr 2014 13:57:27 +0200 (SAST) Date: Wed, 23 Apr 2014 14:00:42 +0200 From: Hugo Lombard To: Johan Hendriks Subject: Re: ZFS unable to import pool Message-ID: <20140423120042.GK2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <5357937D.4080302@gmail.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 11:57:29 -0000 On Wed, Apr 23, 2014 at 12:18:37PM +0200, Johan Hendriks wrote: > > Did you in the past add an extra disk to the pool? > This could explain the whole issue as the pool is missing a whole vdev. > I agree that there's a vdev missing... I was able to "simulate" the current problematic import state (sans failed "disk7", since that doesn't seem to be the stumbling block) by adding 5 disks [1] to get to here: # zpool status test pool: test state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM test ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 md3 ONLINE 0 0 0 md4 ONLINE 0 0 0 md5 ONLINE 0 0 0 md6 ONLINE 0 0 0 md7 ONLINE 0 0 0 raidz1-2 ONLINE 0 0 0 md8 ONLINE 0 0 0 md9 ONLINE 0 0 0 md10 ONLINE 0 0 0 md11 ONLINE 0 0 0 md12 ONLINE 0 0 0 logs md1s1 ONLINE 0 0 0 cache md1s2 ONLINE 0 0 0 errors: No known data errors # Then exporting it, and removing md8-md12, which results in: # zpool import pool: test id: 8932371712846778254 state: UNAVAIL status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. 
see: http://illumos.org/msg/ZFS-8000-6X config: test UNAVAIL missing device raidz1-0 ONLINE md3 ONLINE md4 ONLINE md5 ONLINE md6 ONLINE md7 ONLINE cache md1s2 logs md1s1 ONLINE Additional devices are known to be part of this pool, though their exact configuration cannot be determined. # One more data point: In the 'zdb -l' output on the log device it shows vdev_children: 2 for the pool consisting of raidz1 + log + cache, but it shows vdev_children: 3 for the pool with raidz1 + raidz1 + log + cache. The pool in the problem report also shows 'vdev_children: 3' [2] [1] Trying to add a single device resulted in zpool add complaining with: mismatched replication level: pool uses raidz and new vdev is disk and trying it with three disks said: mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html -- Hugo Lombard .___. (o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 12:03:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 51197B70 for ; Wed, 23 Apr 2014 12:03:11 +0000 (UTC) Received: from mail-ee0-x22c.google.com (mail-ee0-x22c.google.com [IPv6:2a00:1450:4013:c00::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D7DA71C64 for ; Wed, 23 Apr 2014 12:03:10 +0000 (UTC) Received: by mail-ee0-f44.google.com with SMTP id e49so715332eek.3 for ; Wed, 23 Apr 2014 05:03:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=NvyedYgbyEpMIPsjxASq3JwftILnQGfLnn98PnvOD8c=; b=gxDDnCcrTZxJp96m1TdDitJF/hPHHY3cxs+khMN0kjX0OSmQ1fl1Q9KqSh8E/n8dem M9NbhorylWVxnl/FHeiDaq0Bxxg16inb5SmqBGxJRJ4l1iB033trbPCt1Kq7lKFV3Bd8 4ic6lhp/e4e1cgflNhAB8Uk4QJApaJy8mmMrufeNEH2H/0OZWYn99/K5mwDbFtssgv6B ZGb/r7+9cX/ccXDwQk7y4LGqaJrERRurjaYAG/7x3EqnNvK1WF/DsJScVPWHvDPGHY7Z nC1bGZo/uJucZz5OaBonMcs38eMuuFB1pzS7cEiSSjWGOAiaM19oYn5NvQeExdtzfgWM nEhA== X-Received: by 10.14.202.201 with SMTP id d49mr18419618eeo.69.1398254589122; Wed, 23 Apr 2014 05:03:09 -0700 (PDT) Received: from [192.168.1.117] (schavemaker.nl. 
[213.84.84.186]) by mx.google.com with ESMTPSA id w46sm5684315eeo.35.2014.04.23.05.03.07 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 23 Apr 2014 05:03:08 -0700 (PDT) Message-ID: <5357ABFB.9060702@gmail.com> Date: Wed, 23 Apr 2014 14:03:07 +0200 From: Johan Hendriks User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Hugo Lombard , freebsd-fs@freebsd.org Subject: Re: ZFS unable to import pool References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <20140423120042.GK2830@sludge.elizium.za.net> In-Reply-To: <20140423120042.GK2830@sludge.elizium.za.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 12:03:11 -0000 op 23-04-14 14:00, Hugo Lombard schreef: > On Wed, Apr 23, 2014 at 12:18:37PM +0200, Johan Hendriks wrote: >> Did you in the past add an extra disk to the pool? >> This could explain the whole issue as the pool is missing a whole vdev. >> > I agree that there's a vdev missing... > > I was able to "simulate" the current problematic import state (sans > failed "disk7", since that doesn't seem to be the stumbling block) by > adding 5 disks [1] to get to here: > > # zpool status test > pool: test > state: ONLINE > scan: none requested > config: > > NAME STATE READ WRITE CKSUM > test ONLINE 0 0 0 > raidz1-0 ONLINE 0 0 0 > md3 ONLINE 0 0 0 > md4 ONLINE 0 0 0 > md5 ONLINE 0 0 0 > md6 ONLINE 0 0 0 > md7 ONLINE 0 0 0 > raidz1-2 ONLINE 0 0 0 > md8 ONLINE 0 0 0 > md9 ONLINE 0 0 0 > md10 ONLINE 0 0 0 > md11 ONLINE 0 0 0 > md12 ONLINE 0 0 0 > logs > md1s1 ONLINE 0 0 0 > cache > md1s2 ONLINE 0 0 0 > > errors: No known data errors > # > > Then exporting it, and removing md8-md12, which results in: > > # zpool import > pool: test > id: 8932371712846778254 > state: UNAVAIL > status: One or more devices are missing from the system. > action: The pool cannot be imported. Attach the missing > devices and try again. > see: http://illumos.org/msg/ZFS-8000-6X > config: > > test UNAVAIL missing device > raidz1-0 ONLINE > md3 ONLINE > md4 ONLINE > md5 ONLINE > md6 ONLINE > md7 ONLINE > cache > md1s2 > logs > md1s1 ONLINE > > Additional devices are known to be part of this pool, though their > exact configuration cannot be determined. > # > > One more data point: In the 'zdb -l' output on the log device it shows > > vdev_children: 2 > > for the pool consisting of raidz1 + log + cache, but it shows > > vdev_children: 3 > > for the pool with raidz1 + raidz1 + log + cache. The pool in the > problem report also shows 'vdev_children: 3' [2] > > > > [1] Trying to add a single device resulted in zpool add complaining > with: > > mismatched replication level: pool uses raidz and new vdev is disk > > and trying it with three disks said: > > mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz > > > [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html > But you can force it.... If you force it, it will add a vdev not the same as the current vdev. So you will have a raidz1 and a single no parity vdev in the pool. 
If you destroy the single disk vdev then you will get a pool which can not be repaired as far as I know. regards Johan From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 12:03:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8D86DBDA for ; Wed, 23 Apr 2014 12:03:22 +0000 (UTC) Received: from mail-la0-x233.google.com (mail-la0-x233.google.com [IPv6:2a00:1450:4010:c03::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1479C1C66 for ; Wed, 23 Apr 2014 12:03:21 +0000 (UTC) Received: by mail-la0-f51.google.com with SMTP id pv20so670106lab.38 for ; Wed, 23 Apr 2014 05:03:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=MYI5Y0/k5vyJQft6Qe+RRp2ZPq6FuCJvg9vwt5V8tX4=; b=ra8Ik9CWK7WNxfnrJPMKAMoM8Xs+ai5du4MHnzD9UdSsn4hC9kG2jFGHZgoNeMfnVo JF+E8XcNSoI/Gq2fIHP69H1jFHxsVJ00mf+gF6iSUo5eXfL/yKN6j8322JNTQWvfQmNU cJtTngHhjwYwI22j2hK+RyetJFCM7E41VrjM+H8PBIboAIDqV9IlP3slzMFhwyI+dCZ1 TtiKzsydmiyJqYGyOFQufSf26lmhw4xTBEShrgC4tJRkYwS9kJUiErNtLBDAfT8roNpq i1vJMfeU8+rStRqnLqpe3Z6qg5dSJh7Mqnyg+an68tGCbxnlY6eDFGq4tQ+bxRULT0zU x+SQ== MIME-Version: 1.0 X-Received: by 10.152.22.166 with SMTP id e6mr56898laf.71.1398254599984; Wed, 23 Apr 2014 05:03:19 -0700 (PDT) Received: by 10.112.129.164 with HTTP; Wed, 23 Apr 2014 05:03:19 -0700 (PDT) In-Reply-To: <20140423120042.GK2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <20140423120042.GK2830@sludge.elizium.za.net> Date: Wed, 23 Apr 2014 13:03:19 +0100 Message-ID: Subject: Re: ZFS unable to import pool From: Tom Evans To: Hugo Lombard Content-Type: text/plain; charset=UTF-8 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 12:03:22 -0000 On Wed, Apr 23, 2014 at 1:00 PM, Hugo Lombard wrote: > [1] Trying to add a single device resulted in zpool add complaining > with: > > mismatched replication level: pool uses raidz and new vdev is disk > > and trying it with three disks said: > > mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz > > > [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html > In earlier versions of ZFS, ISTR that this check did not exist, and you could do exactly this - expand a pool by adding a vdev that is not of the same "class" as the existing vdevs on the pool. If the vdev still existed, this could be fixed by dumping the data somewhere, re-creating the pool with the correct vdevs, and restoring the data. Otherwise, I think the only solution is to restore from backup. Perhaps you could roll back the txg before the vdev was added, but that is way above my level of knowledge.
Cheers Tom From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 12:16:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1CD85797 for ; Wed, 23 Apr 2014 12:16:01 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id D6E3E1DA1 for ; Wed, 23 Apr 2014 12:16:00 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id 3520B803F; Wed, 23 Apr 2014 14:15:59 +0200 (SAST) Date: Wed, 23 Apr 2014 14:19:14 +0200 From: Hugo Lombard To: Johan Hendriks Subject: Re: ZFS unable to import pool Message-ID: <20140423121914.GL2830@sludge.elizium.za.net> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <20140423120042.GK2830@sludge.elizium.za.net> <5357ABFB.9060702@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <5357ABFB.9060702@gmail.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 12:16:01 -0000 On Wed, Apr 23, 2014 at 02:03:07PM +0200, Johan Hendriks wrote: > > But you can force it.... True, I noted the hint from 'zpool add'. That said, I'm not in the habit of forcing my will on tools. I've seen too many cases of people reporting breakage in tool X and then upon further probing it turns out they indiscriminately used the '-f' to make their immediate problem go away, and then some time later there's a lot of tears over lost data. [1] But I digress. The point that I failed to make is that in my mind there's more to the problem that started this thread than the original statement of: I have accidentally formatted one of the drives in exported ZFS pool. [2] Having said that, in the process of trying to replicate the problem I've learned a lot more about ZFS than I knew before the report, so for that I'm grateful. > If you force it, it will add a vdev not the same as the current > vdev. So you will have a raidz1 and a single no parity vdev in the > pool. If you destroy the single disk vdev then you will get a pool > which can not be repaired as far as I know. Thanks, that seems like ample reason to reinforce my dislike of invoking the '-f' option on tools. [1] There's no evidence at this point to my knowledge that the reporter used '-f'. I'm making a general statement. [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019339.html -- Hugo Lombard .___. 
(o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 12:30:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 41A58E75 for ; Wed, 23 Apr 2014 12:30:59 +0000 (UTC) Received: from mail-ee0-x22c.google.com (mail-ee0-x22c.google.com [IPv6:2a00:1450:4013:c00::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C82CD1059 for ; Wed, 23 Apr 2014 12:30:58 +0000 (UTC) Received: by mail-ee0-f44.google.com with SMTP id e49so745246eek.3 for ; Wed, 23 Apr 2014 05:30:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=6vAeNjdQrs9Bherhi5Jg0wqKt/fOYprDXqKEMwreT5M=; b=duWzgqRdNWiZLUCTk6OeASkp+sIJ1tL8US+Ehy/0uDTnJ+YZY+R1+gH4/TsDKgiOOQ 8Ylw8yntejmwzljWHLFtj4exMl6hG2/vZ087kKDJOc2se2faU7RpXUFNprtdg3Ztpg05 Mus+SYW9KtX5KWpBdvYwmNSpFbVUR518oFhPUO8KxSubUBAElXtgcwbuO9ojV0m+xQfj QJBBkRJFKAR0usIsh6EmM1wWibqLYsySS30GEp5HLDGmj5FDga86DzSjGDG8Jbui2Ilk 0lnSbUxmFM3ruO9kCOhvITUpjQRSYAdcIPW0mUMGk37vW05wEu716ZKiHbBn2rA/O8H3 v4yQ== X-Received: by 10.15.90.201 with SMTP id q49mr20012334eez.65.1398256257064; Wed, 23 Apr 2014 05:30:57 -0700 (PDT) Received: from [192.168.1.117] (schavemaker.nl. [213.84.84.186]) by mx.google.com with ESMTPSA id u1sm5888361eex.31.2014.04.23.05.30.55 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 23 Apr 2014 05:30:56 -0700 (PDT) Message-ID: <5357B27F.9090000@gmail.com> Date: Wed, 23 Apr 2014 14:30:55 +0200 From: Johan Hendriks User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Hugo Lombard , freebsd-fs@freebsd.org Subject: Re: ZFS unable to import pool References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <20140423120042.GK2830@sludge.elizium.za.net> <5357ABFB.9060702@gmail.com> <20140423121914.GL2830@sludge.elizium.za.net> In-Reply-To: <20140423121914.GL2830@sludge.elizium.za.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 12:30:59 -0000 op 23-04-14 14:19, Hugo Lombard schreef: > Thanks, that seems like ample reason to reinforce my dislike of invoking > the '-f' option on tools. > If you start forcing things, you know that you are doing something that can break things. In my opinion it is nice to have the -f. There are many cases you need to force things. > [1] There's no evidence at this point to my knowledge that the reporter > used '-f'. I'm making a general statement. Like Tom Evans said, in the beginning it was possible to add whatever you liked to a pool. Later after many mistakes by a lot of people the vdev check has been added. 
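For anyone who wants to see the failure mode Johan describes first-hand, it can be reproduced safely with file-backed md(4) devices, in the same spirit as Hugo's simulation earlier in the thread. A minimal sketch, assuming a FreeBSD box with mdconfig available (file names, sizes, and md unit numbers are illustrative):

    # truncate -s 256m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
    # mdconfig -a -t vnode -f /tmp/d0    # prints md0; repeat for d1..d3
    # zpool create t raidz1 md0 md1 md2
    # zpool add t md3
    invalid vdev specification
    use '-f' to override the following errors:
    mismatched replication level: pool uses raidz and new vdev is disk
    # zpool add -f t md3                 # forcing creates a single-disk
                                         # top-level vdev with no parity

Once the forced add completes, md3 is a permanent top-level vdev: exporting the pool and then losing that one device reproduces the 'missing device' import failure being discussed.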
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 14:09:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7CE3C81F for ; Wed, 23 Apr 2014 14:09:58 +0000 (UTC) Received: from st11p02mm-asmtp002.mac.com (st11p02mm-asmtpout002.mac.com [17.172.220.237]) by mx1.freebsd.org (Postfix) with ESMTP id 4767B1AA0 for ; Wed, 23 Apr 2014 14:09:57 +0000 (UTC) MIME-version: 1.0 Content-type: text/plain; charset=windows-1252 Received: from [17.153.37.167] (unknown [17.153.37.167]) by st11p02mm-asmtp002.mac.com (Oracle Communications Messaging Server 7u4-27.08(7.0.4.27.7) 64bit (built Aug 22 2013)) with ESMTPSA id <0N4H00AW8M016H50@st11p02mm-asmtp002.mac.com> for freebsd-fs@freebsd.org; Wed, 23 Apr 2014 14:09:39 +0000 (GMT) Subject: Re: ZFS unable to import pool From: Gena Guchin In-reply-to: <5357937D.4080302@gmail.com> Date: Wed, 23 Apr 2014 07:09:35 -0700 Content-transfer-encoding: quoted-printable Message-id: <72E79259-3DB1-48B7-8E5E-19CC2145A464@icloud.com> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> To: Johan Hendriks X-Mailer: Apple Mail (2.1878) X-MANTSH: 1TEIXWV4bG1oaGkdHB0lGUkdDRl5PWBoaEhEKTEMXGx0EGx0YBBIZBBsSEBseGh8 aEQpYTRdLEQptfhcaEQpMWRcbGhsbEQpZSRcRClleF2hjeREKQ04XSxsbGmJCH2lmHlxCGXhzB xlkGx4aE05sEQpYXBcZBBoEHQdNSx0SSEkcTAUbHQQbHRgEEhkEGxIQGx4aHxsRCl5ZF2FAfR4 BEQpMRhdia2sRCkNaFxsdBBsfGQQZHQQbHB0RCkRYFx4RCkRJFxgRCkJFF2BIWFB7c0NrUBteE QpCThdrRRpSUB5DXFlcaBEKQkwXZkh9WWJdUntiWR8RCkJsF29nQVheY19CZhhEEQpCQBdlbmt lf3pnUGcdXREKcGcXbkNueG5FRll9HRoRCnBoF3pnHkdvRmV+GHtkEQpwaBdhYlpOQWNwX1t5c BEKcGgXZl55WV4eU2VtX2IRCnBoF2lCa0ceZkR+HwUBEQpwaBdtbV98HxIcXEx8bhEKcH8Xb09 6WhJQS1hrH1ARCnBfF2sbQ3xbfB0SREN6EQpwfxdtGxp5e21bf0hFbhEKcF8XaH5cRnJPUEdeU GQRCnBnF2V4UGxgXl5BYWkYEQpwbBdnH0VcfnkeG0J7XxEKcEwXa197WFp+Ym9AZ3sR X-CLX-Spam: false X-CLX-Score: 1011 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.11.96,1.0.14,0.0.0000 definitions=2014-04-23_04:2014-04-23,2014-04-23,1970-01-01 signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=2 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=7.0.1-1402240000 definitions=main-1404230210 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 14:09:58 -0000 Johan, Looking through the history, I DID add that disk ada7 (!) to the pool, but I added it as a separate disk. I wanted to re-add the disk to the storage pool, but it added as a new disk... this does help a little.. anything I can do now? can I remove that vdev? thanks! On Apr 23, 2014, at 3:18 AM, Johan Hendriks wrote: > > op 23-04-14 12:01, Hugo Lombard schreef: >> Hello >> >> In your original 'zpool import' output, it shows the following: >> >> Additional devices are known to be part of this pool, though their >> exact configuration cannot be determined. >> >> I'm thinking your problem might be related to devices that are supposed to >> be part of the pool but aren't shown in the import. >> >> For instance, here's my attempt at recreating your scenario: >> >> # zpool import >> pool: t >> id: 15230454775812525624 >> state: DEGRADED >> status: One or more devices are missing from the system. >> action: The pool can be imported despite missing or damaged devices. The >> fault tolerance of the pool may be compromised if imported. >> see: http://illumos.org/msg/ZFS-8000-2Q >> config: >> t DEGRADED >> raidz1-0 DEGRADED >> md3 ONLINE >> md4 ONLINE >> md5 ONLINE >> md6 ONLINE >> 3421664295019948379 UNAVAIL cannot open >> cache >> md1s2 >> logs >> md1s1 ONLINE >> # >> >> As you can see, the pool status is 'DEGRADED' instead of 'UNAVAIL', and >> I don't have the 'Additional devices...' message. >> >> The pool imports OK: >> >> # zpool import t >> # zpool status t >> pool: t >> state: DEGRADED >> status: One or more devices could not be opened. Sufficient replicas exist for >> the pool to continue functioning in a degraded state. >> action: Attach the missing device and online it using 'zpool online'. >> see: http://illumos.org/msg/ZFS-8000-2Q >> scan: none requested >> config: >> NAME STATE READ WRITE CKSUM >> t DEGRADED 0 0 0 >> raidz1-0 DEGRADED 0 0 0 >> md3 ONLINE 0 0 0 >> md4 ONLINE 0 0 0 >> md5 ONLINE 0 0 0 >> md6 ONLINE 0 0 0 >> 3421664295019948379 UNAVAIL 0 0 0 was /dev/md7 >> logs >> md1s1 ONLINE 0 0 0 >> cache >> md1s2 ONLINE 0 0 0 >> errors: No known data errors >> # >> >> As a further test, let's see what happens when the cache disk >> disappears: >> >> # zpool export t >> # gpart delete -i 2 md1 >> md1s2 deleted >> # zpool import >> pool: t >> id: 15230454775812525624 >> state: DEGRADED >> status: One or more devices are missing from the system. >> action: The pool can be imported despite missing or damaged devices. The >> fault tolerance of the pool may be compromised if imported. >> see: http://illumos.org/msg/ZFS-8000-2Q >> config: >> t DEGRADED >> raidz1-0 DEGRADED >> md3 ONLINE >> md4 ONLINE >> md5 ONLINE >> md6 ONLINE >> 3421664295019948379 UNAVAIL cannot open >> cache >> 7736388725784014558 >> logs >> md1s1 ONLINE >> # zpool import t >> # zpool status t >> pool: t >> state: DEGRADED >> status: One or more devices could not be opened. Sufficient replicas exist for >> the pool to continue functioning in a degraded state. >> action: Attach the missing device and online it using 'zpool online'. >> see: http://illumos.org/msg/ZFS-8000-2Q >> scan: none requested >> config: >> NAME STATE READ WRITE CKSUM >> t DEGRADED 0 0 0 >> raidz1-0 DEGRADED 0 0 0 >> md3 ONLINE 0 0 0 >> md4 ONLINE 0 0 0 >> md5 ONLINE 0 0 0 >> md6 ONLINE 0 0 0 >> 3421664295019948379 UNAVAIL 0 0 0 was /dev/md7 >> logs >> md1s1 ONLINE 0 0 0 >> cache >> 7736388725784014558 UNAVAIL 0 0 0 was /dev/md1s2 >> errors: No known data errors >> # >> >> So even with a missing raidz component and a missing cache device, the >> pool still imports. >> >> I think some crucial piece of information is missing to complete the >> picture. >> > Did you in the past add an extra disk to the pool? > This could explain the whole issue as the pool is missing a whole vdev. > regards > Johan > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 14:10:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8B6118CA for ; Wed, 23 Apr 2014 14:10:46 +0000 (UTC) Received: from st11p02mm-asmtp002.mac.com (st11p02mm-asmtpout002.mac.com [17.172.220.237]) by mx1.freebsd.org (Postfix) with ESMTP id 5551F1AB2 for ; Wed, 23 Apr 2014 14:10:46 +0000 (UTC) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from [17.153.37.167] (unknown [17.153.37.167]) by st11p02mm-asmtp002.mac.com (Oracle Communications Messaging Server 7u4-27.08(7.0.4.27.7) 64bit (built Aug 22 2013)) with ESMTPSA id <0N4H00AW8M016H50@st11p02mm-asmtp002.mac.com> for freebsd-fs@freebsd.org; Wed, 23 Apr 2014 14:10:29 +0000 (GMT) Subject: Re: ZFS unable to import pool From: Gena Guchin In-reply-to: <5357ABFB.9060702@gmail.com> Date: Wed, 23 Apr 2014 07:10:27 -0700 Message-id: References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <20140423120042.GK2830@sludge.elizium.za.net> <5357ABFB.9060702@gmail.com> To: Johan Hendriks X-Mailer: Apple Mail (2.1878) X-MANTSH: 1TEIXWV4bG1oaGkdHB0lGUkdDRl5PWBoaHBEKTEMXGx0EGx0YBBIZBBsSEBseGh8 aEQpYTRdLEQptfhcaEQpMWRcbGhsbEQpZSRcRClleF2hjeREKQ04XSxsbGmJCH2hfHx5yGXhzB xlkGx4bGn5mEQpYXBcZBBoEHQdNSx0SSEkcTAUbHQQbHRgEEhkEGxIQGx4aHxsRCl5ZF2FAfR9 dEQpMRhdia2sRCkNaFxsdBBsfGQQZHQQbHB0RCkRYFx4RCkRJFxgRCkJFF2BIWFB7c0NrUBteE QpCThdrRRpSUB5DXFlcaBEKQkwXZkh9WWJdUntiWR8RCkJsF29nQVheY19CZhhEEQpCQBdlbmt lf3pnUGcdXREKcGcXbkNueG5FRll9HRoRCnBoF2h5TH0YGklkSFpHEQpwaBdjfVtoZG1Oblllb xEKcGgXbkZdXUdNE0dzSUgRCnBoF20TWB5iX28BZFhgEQpwaBdiY1lLTwUeb2NPHhEKcH8Xb09 6WhJQS1hrH1ARCnBfF2sbQ3xbfB0SREN6EQpwfxdtGxp5e21bf0hFbhEKcF8XaVwTZE1sfllFe 08RCnBfF2h+XEZyT1BHXlBkEQpwZxdleFBsYF5eQWFpGBEKcGwXZx9FXH55HhtCe18RCnBMF2t fe1hafmJvQGd7EQ== X-CLX-Spam: false X-CLX-Score: 1011 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.11.96,1.0.14,0.0.0000 definitions=2014-04-23_04:2014-04-23,2014-04-23,1970-01-01 signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=2 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=7.0.1-1402240000 definitions=main-1404230210 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 14:10:46 -0000 looks like this is what I did :( On Apr 23, 2014, at 5:03 AM, Johan Hendriks wrote: > > op 23-04-14 14:00, Hugo Lombard schreef: >> On Wed, Apr 23, 2014 at 12:18:37PM +0200, Johan Hendriks wrote: >>> Did you in the past add an extra disk to the pool? >>> This could explain the whole issue as the pool is missing a whole vdev. >>> >> I agree that there's a vdev missing...
>> >> I was able to "simulate" the current problematic import state (sans >> failed "disk7", since that doesn't seem to be the stumbling block) by >> adding 5 disks [1] to get to here: >> >> # zpool status test >> pool: test >> state: ONLINE >> scan: none requested >> config: >> NAME STATE READ WRITE CKSUM >> test ONLINE 0 0 0 >> raidz1-0 ONLINE 0 0 0 >> md3 ONLINE 0 0 0 >> md4 ONLINE 0 0 0 >> md5 ONLINE 0 0 0 >> md6 ONLINE 0 0 0 >> md7 ONLINE 0 0 0 >> raidz1-2 ONLINE 0 0 0 >> md8 ONLINE 0 0 0 >> md9 ONLINE 0 0 0 >> md10 ONLINE 0 0 0 >> md11 ONLINE 0 0 0 >> md12 ONLINE 0 0 0 >> logs >> md1s1 ONLINE 0 0 0 >> cache >> md1s2 ONLINE 0 0 0 >> errors: No known data errors >> # >> >> Then exporting it, and removing md8-md12, which results in: >> >> # zpool import >> pool: test >> id: 8932371712846778254 >> state: UNAVAIL >> status: One or more devices are missing from the system. >> action: The pool cannot be imported. Attach the missing >> devices and try again. >> see: http://illumos.org/msg/ZFS-8000-6X >> config: >> test UNAVAIL missing device >> raidz1-0 ONLINE >> md3 ONLINE >> md4 ONLINE >> md5 ONLINE >> md6 ONLINE >> md7 ONLINE >> cache >> md1s2 >> logs >> md1s1 ONLINE >> Additional devices are known to be part of this pool, though their >> exact configuration cannot be determined. >> # >> >> One more data point: In the 'zdb -l' output on the log device it shows >> >> vdev_children: 2 >> >> for the pool consisting of raidz1 + log + cache, but it shows >> >> vdev_children: 3 >> >> for the pool with raidz1 + raidz1 + log + cache. The pool in the >> problem report also shows 'vdev_children: 3' [2] >> >> >> >> [1] Trying to add a single device resulted in zpool add complaining >> with: >> >> mismatched replication level: pool uses raidz and new vdev is disk >> >> and trying it with three disks said: >> >> mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz >> >> >> [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html >> > But you can force it.... > If you force it, it will add a vdev not the same as the current vdev. So you will have a raidz1 and a single no parity vdev in the pool. If you destroy the single disk vdev then you will get a pool which can not be repaired as far as I know. 
> regards > Johan > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 14:18:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E5AE4A2F for ; Wed, 23 Apr 2014 14:18:08 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C6BF61BAE for ; Wed, 23 Apr 2014 14:18:08 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id B6B397402F; Wed, 23 Apr 2014 07:18:07 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 29010-01-9; Wed, 23 Apr 2014 07:18:07 -0700 (PDT) Received: from [10.8.0.14] (unknown [10.8.0.14]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id 682D274F9E; Wed, 23 Apr 2014 07:16:23 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) Subject: Re: ZFS unable to import pool From: Jordan Hubbard In-Reply-To: <72E79259-3DB1-48B7-8E5E-19CC2145A464@icloud.com> Date: Wed, 23 Apr 2014 19:15:56 +0500 Content-Transfer-Encoding: quoted-printable Message-Id: <888649C4-CC66-48A6-9901-BEA93D1BBFA3@mail.turbofuzz.com> References: <20140423064203.GD2830@sludge.elizium.za.net> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <72E79259-3DB1-48B7-8E5E-19CC2145A464@icloud.com> To: Gena Guchin X-Mailer: Apple Mail (2.1874) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Apr 2014 14:18:09 -0000 If you added a single disk to a pool, you have no choice but to destroy the pool and start over. The single disk will essentially degrade the performance of the whole pool, because it represents a unique (100Mb/sec, typical) transaction group now, and if you lose that one disk you will also lose the entire pool since it has no redundancy. This is a common mistake people make with ZFS, and it sucks, but block pointer rewrite was never implemented so that's just the way it is, too. That's another reason for FreeBSD-based front-ends to ZFS like FreeNAS. The GUI adds some seat-belts to prevent users from trivially doing things like that. On the command line, all bets are off. - Jordan On Apr 23, 2014, at 7:09 PM, Gena Guchin wrote: > Looking through the history, I DID add that disk ada7 (!) to the pool, but I added it as a separate disk. I wanted to re-add the disk to the storage pool, but it added as a new disk... > this does help a little.. > > > anything I can do now? > can I remove that vdev?
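Jordan's diagnosis can be cross-checked against the on-disk labels using the same zdb invocation from the start of the thread: the vdev_children count recorded in any surviving label says how many top-level vdevs the pool expects. A sketch, run against one of the intact pool members (the device name is illustrative):

    # zdb -l /dev/ada1 | grep vdev_children
        vdev_children: 3

As Hugo noted, a pool built from one raidz1 vdev plus a log device reports vdev_children: 2, so a value of 3 here confirms that a third top-level vdev (the accidentally added disk) is required for import.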
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 14:19:58 2014
From: Gennadiy Gulchin
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 07:19:44 -0700
To: Jordan Hubbard
Cc: "freebsd-fs@freebsd.org"

Any data can be salvaged?

--Gena

> On Apr 23, 2014, at 7:15 AM, Jordan Hubbard wrote:
>
> If you added a single disk to a pool, you have no choice but to
> destroy the pool and start over. The single disk will essentially
> degrade the performance of the whole pool, because it represents a
> unique (100Mb/sec, typical) transaction group now, and if you lose
> that one disk you will also lose the entire pool since it has no
> redundancy.
>
> This is a common mistake people make with ZFS, and it sucks, but
> block pointer rewrite was never implemented so that's just the way
> it is, too. That's another reason for FreeBSD-based front-ends to
> ZFS like FreeNAS.
> The GUI adds some seat-belts to prevent users from trivially doing
> things like that. On the command line, all bets are off.
>
> - Jordan
>
>> On Apr 23, 2014, at 7:09 PM, Gena Guchin wrote:
>>
>> Looking through the history, I DID add that disk ada7 (!) to the
>> pool, but I added it as a separate disk. I wanted to re-add the disk
>> to the storage pool, but it added as a new disk...
>> this does help a little..
>>
>>
>> anything I can do now?
>> can I remove that vdev?

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 14:22:09 2014
From: Karl Denninger
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 09:21:47 -0500
To: freebsd-fs@freebsd.org

Probably not.

You can try importing it read-only, but if it refuses that then you're
screwed.

The reason is that ZFS spreads data in a given file around, and as
such it's likely that bits of various files are interspersed into the
missing device, so a huge percentage of the files on the volume are in
fact damaged and unusable.

/*Filesystem based "redundancy" is not a backup strategy!*/

On 4/23/2014 9:19 AM, Gennadiy Gulchin wrote:
> Any data can be salvaged?
>
> --Gena
>
>> On Apr 23, 2014, at 7:15 AM, Jordan Hubbard wrote:
>>
>> If you added a single disk to a pool, you have no choice but to
>> destroy the pool and start over.
>> The single disk will essentially degrade the performance of the
>> whole pool, because it represents a unique (100Mb/sec, typical)
>> transaction group now, and if you lose that one disk you will also
>> lose the entire pool since it has no redundancy.
>>
>> This is a common mistake people make with ZFS, and it sucks, but
>> block pointer rewrite was never implemented so that's just the way
>> it is, too. That's another reason for FreeBSD-based front-ends to
>> ZFS like FreeNAS. The GUI adds some seat-belts to prevent users
>> from trivially doing things like that. On the command line, all
>> bets are off.
>>
>> - Jordan
>>
>>> On Apr 23, 2014, at 7:09 PM, Gena Guchin wrote:
>>>
>>> Looking through the history, I DID add that disk ada7 (!) to the
>>> pool, but I added it as a separate disk. I wanted to re-add the
>>> disk to the storage pool, but it added as a new disk...
>>> this does help a little..
>>>
>>> anything I can do now?
>>> can I remove that vdev?
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

--
-- Karl
karl@denninger.net
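
For anyone arriving at this thread in the same state: the read-only
attempt Karl mentions would look roughly like the following (pool name
from the thread; a pool whose top-level vdev is missing will typically
refuse even this):

# zpool import -o readonly=on storage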
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 14:22:56 2014
From: Jordan Hubbard
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 19:21:52 +0500
To: Gennadiy Gulchin
Cc: "freebsd-fs@freebsd.org"

Back when you had just formatted the disk, yeah, probably. Now, with
the additional changes, I would say no. Sorry. :(

- Jordan

On Apr 23, 2014, at 7:19 PM, Gennadiy Gulchin wrote:

> Any data can be salvaged?
>
> --Gena
>
>> On Apr 23, 2014, at 7:15 AM, Jordan Hubbard wrote:
>>
>> If you added a single disk to a pool, you have no choice but to
>> destroy the pool and start over. The single disk will essentially
>> degrade the performance of the whole pool, because it represents a
>> unique (100Mb/sec, typical) transaction group now, and if you lose
>> that one disk you will also lose the entire pool since it has no
>> redundancy.
>>
>> This is a common mistake people make with ZFS, and it sucks, but
>> block pointer rewrite was never implemented so that's just the way
>> it is, too. That's another reason for FreeBSD-based front-ends to
>> ZFS like FreeNAS. The GUI adds some seat-belts to prevent users
>> from trivially doing things like that. On the command line, all
>> bets are off.
>>
>> - Jordan
>>
>>> On Apr 23, 2014, at 7:09 PM, Gena Guchin wrote:
>>>
>>> Looking through the history, I DID add that disk ada7 (!) to the
>>> pool, but I added it as a separate disk. I wanted to re-add the
>>> disk to the storage pool, but it added as a new disk...
>>> this does help a little..
>>>
>>> anything I can do now?
>>> can I remove that vdev?
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 15:03:25 2014
From: Tom Evans
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 16:03:22 +0100
Cc: FreeBSD FS

On Wed, Apr 23, 2014 at 3:21 PM, Karl Denninger wrote:
> /*Filesystem based "redundancy" is not a backup strategy!*/

It's my (home) backup strategy :(

Very few cost-efficient ways to back up 15TB+ of data other than
redundant spinning rust.

Cheers

Tom

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 15:10:42 2014
From: Karl Denninger
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 10:10:31 -0500
To: freebsd-fs@freebsd.org

On 4/23/2014 10:03 AM, Tom Evans wrote:
> On Wed, Apr 23, 2014 at 3:21 PM, Karl Denninger wrote:
>> /*Filesystem based "redundancy" is not a backup strategy!*/
>>
> It's my (home) backup strategy :(
>
> Very few cost-efficient ways to back up 15TB+ of data other than
> redundant spinning rust.
>
I have a large home system as well.

But I do back it up to other spinning pieces of rust, and rotate the
backups out to a bank safe-deposit box. If I make a terrible mistake
(or my hardware and/or software does) I have a means of recovery.
There are no guarantees of course in that I COULD wind up with a bad
disk in the safe-deposit box, but if my house burns down I have a shot
at recovery with high odds of success -- an act that would otherwise
be impossible. Partitioning my data off into "essentially archival,
read-almost-only" and "active" means that the former needs to be
updated rarely and the latter is of small enough size that I don't go
crazy doing it either in money or time.

And I *HAVE* had things like this happen -- twice in the last 20 years
I've had a disk adapter go insane and scribble on MULTIPLE spindles at
once. There is no RAID strategy that will protect you against this
event; you either have a backup or you're done.

ZFS actually makes this easier with send/receive and the ability to
import a pool, send to it and then export it. The backup pool can have
compression turned on where for performance reasons it may not make
sense for the online pool to do so. And you can rotate that out fairly
easily too; you can take a 2-way mirror, add a third disk and let it
resilver, then split the third one off and remove it, giving you a
dismounted copy you can then stick in a box and yet, if you need it --
it's there.

--
-- Karl
karl@denninger.net
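
A minimal sketch of the mirror rotation Karl describes, with
hypothetical names ('tank' mirrored on da0/da1, da2 as the rotating
third disk):

# zpool attach tank da0 da2
(wait for 'zpool status tank' to report the resilver complete)
# zpool split tank tank-offsite da2
('tank-offsite' is now an exported single-disk pool that can be stored
offline and imported later if ever needed)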
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 16:20:46 2014
From: Gennadiy Gulchin
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 09:20:42 -0700
To: Karl Denninger
Cc: "freebsd-fs@freebsd.org"

Thanks for your help guys!

The only thing that keeps my hope alive is that the disk I added is
the same physical disk as the ones in the raidz1 pool. Or does having
just one disk outside of raidz1, and making this disk unavailable,
cause the whole array to tank?

--Gena

> On Apr 23, 2014, at 8:10 AM, Karl Denninger wrote:
>
>> On 4/23/2014 10:03 AM, Tom Evans wrote:
>>> On Wed, Apr 23, 2014 at 3:21 PM, Karl Denninger wrote:
>>> /*Filesystem based "redundancy" is not a backup strategy!*/
>> It's my (home) backup strategy :(
>>
>> Very few cost-efficient ways to back up 15TB+ of data other than
>> redundant spinning rust.
> I have a large home system as well.
>
> But I do back it up to other spinning pieces of rust, and rotate the
> backups out to a bank safe-deposit box. If I make a terrible mistake
> (or my hardware and/or software does) I have a means of recovery.
> There are no guarantees of course in that I COULD wind up with a bad
> disk in the safe-deposit box, but if my house burns down I have a
> shot at recovery with high odds of success -- an act that would
> otherwise be impossible. Partitioning my data off into "essentially
> archival, read-almost-only" and "active" means that the former needs
> to be updated rarely and the latter is of small enough size that I
> don't go crazy doing it either in money or time.
>
> And I *HAVE* had things like this happen -- twice in the last 20
> years I've had a disk adapter go insane and scribble on MULTIPLE
> spindles at once. There is no RAID strategy that will protect you
> against this event; you either have a backup or you're done.
>
> ZFS actually makes this easier with send/receive and the ability to
> import a pool, send to it and then export it. The backup pool can
> have compression turned on where for performance reasons it may not
> make sense for the online pool to do so. And you can rotate that out
> fairly easily too; you can take a 2-way mirror, add a third disk and
> let it resilver, then split the third one off and remove it, giving
> you a dismounted copy you can then stick in a box and yet, if you
> need it -- it's there.
>
> --
> -- Karl
> karl@denninger.net

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 16:20:59 2014
From: Tom Evans
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 17:20:56 +0100
To: Karl Denninger
Cc: FreeBSD FS

On Wed, Apr 23, 2014 at 4:10 PM, Karl Denninger wrote:
> I have a large home system as well.
>
> But I do back it up to other spinning pieces of rust, and rotate the
> backups out to a bank safe-deposit box. If I make a terrible mistake
> (or my hardware and/or software does) I have a means of recovery.
> There are no guarantees of course in that I COULD wind up with a bad
> disk in the safe-deposit box, but if my house burns down I have a
> shot at recovery with high odds of success -- an act that would
> otherwise be impossible. Partitioning my data off into "essentially
> archival, read-almost-only" and "active" means that the former needs
> to be updated rarely and the latter is of small enough size that I
> don't go crazy doing it either in money or time.

I recently re-jigged my setup to do just this - the root pool and
working set are on a pair of SSDs, and the rarely written and seldom
read data lives on a special archive pool - an 8-disk raidz2.

> And I *HAVE* had things like this happen -- twice in the last 20
> years I've had a disk adapter go insane and scribble on MULTIPLE
> spindles at once. There is no RAID strategy that will protect you
> against this event; you either have a backup or you're done.

I didn't want to know that :)

> ZFS actually makes this easier with send/receive and the ability to
> import a pool, send to it and then export it. The backup pool can
> have compression turned on where for performance reasons it may not
> make sense for the online pool to do so. And you can rotate that out
> fairly easily too; you can take a 2-way mirror, add a third disk and
> let it resilver, then split the third one off and remove it, giving
> you a dismounted copy you can then stick in a box and yet, if you
> need it -- it's there.

I should be doing more of this, although it is trickier to do with a
large pool. Splitting my datasets into smaller chunks would help with
backing up to dismounted pools.

Interesting advice, thanks Karl!

Cheers

Tom
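
A minimal sketch of that import/send/export cycle, again with
hypothetical names ('tank' is the live pool, 'backup' the removable
one):

# zpool import backup
# zfs snapshot -r tank@offsite-20140423
# zfs send -R tank@offsite-20140423 | zfs receive -duF backup
# zpool export backup

Compression can be enabled on 'backup' independently of 'tank', as
Karl notes, and later runs can use an incremental send (-i/-I) between
two snapshots instead of a full one.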
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 16:26:41 2014
From: Jordan Hubbard
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 21:26:21 +0500
To: Gennadiy Gulchin
Cc: "freebsd-fs@freebsd.org"

On Apr 23, 2014, at 9:20 PM, Gennadiy Gulchin wrote:

> The only thing that keeps my hope alive is that the disk I added is
> the same physical disk as the ones in the raidz1 pool. Or does
> having just one disk outside of raidz1, and making this disk
> unavailable, cause the whole array to tank?

Correct. You really don't want this configuration. :(

- Jordan

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 16:30:35 2014
From: "Ronald Klop"
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 18:30:22 +0200
To: "Hugo Lombard", "Gennadiy Gulchin"
Cc: FreeBSD Filesystems

Are you sure you didn't use /dev/zero? You can't read anything from
/dev/null AFAIK.

Ronald.

On Wed, 23 Apr 2014 09:48:16 +0200, Gennadiy Gulchin wrote:

> Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1...
>
> --Gena
>
>> On Apr 22, 2014, at 11:42 PM, Hugo Lombard wrote:
>>
>>> On Mon, Apr 21, 2014 at 12:29:27PM -0700, Gena Guchin wrote:
>>>
>>> I have this huge problem with my ZFS server. I have accidentally
>>> formatted one of the drives in exported ZFS pool.
>>
>> Hello
>>
>> Apologies if I missed it, but can you please explain what happened
>> during the time the disk got 'formatted'?
>>
>> Regards
>>
>> --
>> Hugo Lombard
>>  .___.
>>  (o,o)
>>  /)  )
>> ---"-"---
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 16:37:40 2014
From: Gennadiy Gulchin
Subject: Re: ZFS unable to import pool
Date: Wed, 23 Apr 2014 09:37:21 -0700
To: Ronald Klop
Cc: FreeBSD Filesystems

Yep, /dev/zero it was :(

--Gena

> On Apr 23, 2014, at 9:30 AM, Ronald Klop wrote:
>
> Are you sure you didn't use /dev/zero? You can't read anything from
> /dev/null AFAIK.
>
> Ronald.
>
>> On Wed, 23 Apr 2014 09:48:16 +0200, Gennadiy Gulchin wrote:
>>
>> Sorry, it was dd if=/dev/null of=/dev/ada7 bs=512 count=1...
>>
>> --Gena
>>
>>>> On Apr 22, 2014, at 11:42 PM, Hugo Lombard wrote:
>>>>
>>>> On Mon, Apr 21, 2014 at 12:29:27PM -0700, Gena Guchin wrote:
>>>>
>>>> I have this huge problem with my ZFS server. I have accidentally
>>>> formatted one of the drives in exported ZFS pool.
>>>
>>> Hello
>>>
>>> Apologies if I missed it, but can you please explain what happened
>>> during the time the disk got 'formatted'?
>>>
>>> Regards
>>>
>>> --
>>> Hugo Lombard
>>>  .___.
>>>  (o,o)
>>>  /)  )
>>> ---"-"---
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
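
A side note on the distinction Ronald raised, which matters for
recovery: /dev/null returns EOF immediately on read, so the command as
first quoted would have copied nothing, while /dev/zero supplies an
endless stream of zero bytes:

# dd if=/dev/null of=/dev/ada7 bs=512 count=1
(copies "0+0 records"; the disk is untouched)
# dd if=/dev/zero of=/dev/ada7 bs=512 count=1
(overwrites exactly one 512-byte sector at the start of the disk)

With count=1 only the first sector is gone, which is what makes the
label-recovery suggestion later in the thread plausible.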
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 24 09:40:17 2014
From: "Ronald Klop"
Subject: Re: ZFS unable to import pool
Date: Thu, 24 Apr 2014 11:40:11 +0200
To: "Gena Guchin"
Cc: freebsd-fs@freebsd.org

On Tue, 22 Apr 2014 18:36:48 +0200, Gena Guchin wrote:

> Ronald,
>
> system does see the disk, ada7, in this case. Nothing has been
> disconnected from the system.
>
> what steps do you suggest I take with GEOM?

As I read the rest of the discussion I don't think this will help you.
People with more knowledge about ZFS have already explained a lot of
your problem.

Ronald.

>
> thanks!
>
> On Apr 22, 2014, at 2:49 AM, Ronald Klop wrote:
>
>> On Mon, 21 Apr 2014 21:29:27 +0200, Gena Guchin wrote:
>>
>>> Hello FreeBSD users,
>>>
>>> my apologies for reposting, but I'd really need your help!
>>>
>>> I have this huge problem with my ZFS server. I have accidentally
>>> formatted one of the drives in an exported ZFS pool, and now I
>>> can't import the pool back. This is an extremely important pool
>>> for me. The device that is missing is still attached to the
>>> system. Any help would be greatly appreciated.
>>>
>>> #uname -a
>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan
>>> 16 22:34:59 UTC 2014
>>> root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>>>
>>> #zpool import
>>>    pool: storage
>>>      id: 11699153865862401654
>>>   state: UNAVAIL
>>>  status: One or more devices are missing from the system.
>>>  action: The pool cannot be imported. Attach the missing
>>>          devices and try again.
>>>     see: http://illumos.org/msg/ZFS-8000-6X
>>>  config:
>>>
>>>         storage                 UNAVAIL  missing device
>>>           raidz1-0              DEGRADED
>>>             ada3                ONLINE
>>>             ada4                ONLINE
>>>             ada5                ONLINE
>>>             ada6                ONLINE
>>>             248348789931078390  UNAVAIL  cannot open
>>>         cache
>>>           ada1s2
>>>         logs
>>>           ada1s1                ONLINE
>>>
>>>         Additional devices are known to be part of this pool,
>>>         though their exact configuration cannot be determined.
>>>
>>> # zpool list
>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>> zroot   920G  17.9G   902G     1%  1.00x  ONLINE  -
>>>
>>> # zpool upgrade
>>> This system supports ZFS pool feature flags.
>>>
>>> All pools are formatted using feature flags.
>>>
>>> Every feature flags pool has all supported features enabled.
>>>
>>> # zfs upgrade
>>> This system is currently running ZFS filesystem version 5.
>>>
>>> All filesystems are formatted with the current version.
>>>
>>> Thanks a lot!
>>
>> Does FreeBSD see the disk? Is it in /dev/ada2 (or another number)?
>> If FreeBSD does not know anything about the disk, ZFS can't either.
>> A reboot or some fiddling (partitioning?) with GEOM might make the
>> disk reappear.
>>
>> Ronald.
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 25 10:24:20 2014
From: Adrian Gschwend
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 25 Apr 2014 12:17:34 +0200
To: freebsd-fs@freebsd.org

On 27.03.14 12:52, Karl Denninger wrote:
> Cross-posted over to -STABLE in the hope of expanding review and
> testing by others.

I'm happy to report that this definitely solved the issues I had on my
box for years [1]. It is running faster and more stable than ever;
thanks again, Karl!

What is the process for merging the patch into the development kernel?
Does it stay "alone" until a kernel/ZFS dev decides to pick it up, or
is there a formal process?
regards

Adrian

[1]: http://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019043.html

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 26 19:47:39 2014
From: Rick Macklem
Subject: RFC: using ceph as a backend for an NFSv4.1 pNFS server
Date: Sat, 26 Apr 2014 15:47:31 -0400 (EDT)
To: FreeBSD Filesystems

Hi,

The non-pNFS v4.1 server in the projects area is just about ready for
head, I think. However, without pNFS, NFSv4.1 isn't all that
interesting. The problem is that doing a pNFS server is a non-trivial
exercise. I am now somewhat familiar with pNFS (from doing the client
side), but have no expertise w.r.t. cluster file systems, etc.

For those not familiar with pNFS, the basic idea is that the NFSv4.1
server becomes a metadata server (MDS) and hands out what are called
layouts and devinfo, so that the client can access data server(s) (DS)
to read/write the file. There are RFCs that define both block/volume
(using iSCSI or similar) and object (using something called OSD2)
layouts.

Although I suspect there are many ways to do a pNFS server, I think
that building it on top of a cluster file system may be the simplest.

So, this leads me to...
At a glance (just the web pages, I haven't looked at the source), it
appears that ceph might be useful as a backend to a pNFS server. It
has a POSIX interface (that could be used by the metadata server) as
well as both object (not OSD2, I suspect) and block interfaces.

The licensing appears to be LGPL, which isn't ideal, but I'd say
better than GPLv3 (which is what Gluster appears to be).

Does anyone have experience using ceph or some other cluster file
system such that you might have some idea w.r.t. its usefulness for
this?

Any other comments w.r.t. this would be appreciated, including generic
stuff like "we couldn't care less about pNFS" or technical
details/opinions.
Thanks in advance for any feedback, rick

ps: I'm nowhere near committing to do this at this point, and I do
realize that even completing the ceph port to FreeBSD might be beyond
my limited resources.

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 26 21:50:14 2014
From: Devin Teske
Subject: RE: using ceph as a backend for an NFSv4.1 pNFS server
Date: Sat, 26 Apr 2014 14:50:02 -0700
To: 'Rick Macklem', 'FreeBSD Filesystems'

> -----Original Message-----
> From: Rick Macklem [mailto:rmacklem@uoguelph.ca]
> Sent: Saturday, April 26, 2014 12:48 PM
> To: FreeBSD Filesystems
> Subject: RFC: using ceph as a backend for an NFSv4.1 pNFS server
>
> Hi,
>
> The non-pNFS v4.1 server in the projects area is just about ready
> for head, I think. However, without pNFS, NFSv4.1 isn't all that
> interesting. The problem is that doing a pNFS server is a
> non-trivial exercise. I am now somewhat familiar with pNFS (from
> doing the client side), but have no expertise w.r.t. cluster file
> systems, etc.
>
> For those not familiar with pNFS, the basic idea is that the NFSv4.1
> server becomes a metadata server (MDS) and hands out what are called
> layouts and devinfo, so that the client can access data server(s)
> (DS) to read/write the file. There are RFCs that define both
> block/volume (using iSCSI or similar) and object (using something
> called OSD2) layouts.
>
> Although I suspect there are many ways to do a pNFS server, I think
> that building it on top of a cluster file system may be the
> simplest.
>
> So, this leads me to...
> At a glance (just the web pages, I haven't looked at the source), it
> appears that ceph might be useful as a backend to a pNFS server.
> It has a POSIX interface (that could be used by the metadata server) as well > as both object (not OSD2, I suspect) and block interfaces. > > The licensing appears to be LGPL, which isn't ideal, but I'd say better than > GPLv3 (which is what Gluster appears to be). > > Does anyone have experience using ceph or some other cluster file system > such that you might have some idea w.r.t. its usefulness for this? > My spies @ Mozilla tell me it's all the rage there and they've been saying this for years. -- Devin _____________ The information contained in this message is proprietary and/or confidential. If you are not the intended recipient, please: (i) delete the message and all copies; (ii) do not disclose, distribute or use the message in any manner; and (iii) notify the sender immediately. In addition, please be aware that any message addressed to our domain is subject to archiving and review by persons other than the intended recipient. Thank you. From owner-freebsd-fs@FreeBSD.ORG Sun Apr 27 17:17:26 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 04F26D10 for ; Sun, 27 Apr 2014 17:17:26 +0000 (UTC) Received: from mail-yh0-f50.google.com (mail-yh0-f50.google.com [209.85.213.50]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B9309C56 for ; Sun, 27 Apr 2014 17:17:25 +0000 (UTC) Received: by mail-yh0-f50.google.com with SMTP id b6so2584836yha.9 for ; Sun, 27 Apr 2014 10:17:24 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:content-type; bh=sFQCj9i42iHBiX2X11swISKZ2KKJFfWvbmruzJ+sekg=; b=UVP09mk0PVsTZK7K5KJkuxN1L0ZBSl6wAG0lWuEBlnZWWVp27Rr8gAs5YHo/CWxs8O T3RvtyXdLiAS9N0MXdGFkRUjWjKeIVrawxP5FS73TE/COwc5aMadoWWTt/Cch/GzLbhv UFsswR5+KjV07orpyq7+G2uRNoYOisO6hw8fLQZBqfwrJg/HXKxj4RG9cMWgfe+IOtUm Chupgo32IcRuIAjbA7ssL5kK+MlCMq+mNpymLNChSv+gpScZvRZLBqbICRVkNduzSF+q 8cc7pgPWc/DjZhu3FlkVqc3eZIcNHKuqo/FZhOdsb9W53541VQUYNiJ5KsPXdg2sgLHp 428g== X-Gm-Message-State: ALoCoQmBOSHxCghGkHmalxn5/0yLmq3jdVrr4t4UCUTfkfMNP0Dsj8Fl3ykYv58c6Mp6bNREXrpo X-Received: by 10.236.41.165 with SMTP id h25mr2012432yhb.126.1398617294246; Sun, 27 Apr 2014 09:48:14 -0700 (PDT) MIME-Version: 1.0 Received: by 10.170.159.212 with HTTP; Sun, 27 Apr 2014 09:47:53 -0700 (PDT) X-Originating-IP: [67.10.123.128] In-Reply-To: References: <6DACDF6E-E1ED-49C0-975C-A91F68EA8840@icloud.com> From: Wes Morgan Date: Sun, 27 Apr 2014 11:47:53 -0500 Message-ID: Subject: Re: ZFS unable to import pool To: Gena Guchin , freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 27 Apr 2014 17:17:26 -0000 You stated before that all you did was zero the first sector of the drive; you should still be able to recover this. I would make an image of the drive first, and then try copying one of the backup labels over the corrupted ones.
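For anyone following along: ZFS keeps four copies of the vdev label on every member disk, two 256KB labels at the front (L0 and L1) and two in the last 512KB (L2 and L3), which is why zeroing the first sector leaves the tail labels intact; zdb -l /dev/ada7 will dump whichever labels are still readable. Below is a minimal read-only sketch (illustrative only; the device name is hypothetical and the "has data" test is just a non-zero scan, not a checksum verification) that reports which of the four label regions survive. Any actual copying of a label should be attempted on an image of the disk, per the advice above.

    #include <sys/types.h>
    #include <sys/disk.h>
    #include <sys/ioctl.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define LABEL_SIZE      (256 * 1024)

    /* Report which of the four ZFS vdev label regions still contain data. */
    int
    main(int argc, char **argv)
    {
            char *buf;
            off_t media, off[4];
            int fd, i, j;

            if (argc != 2)
                    errx(1, "usage: %s /dev/adaX", argv[0]);
            if ((buf = malloc(LABEL_SIZE)) == NULL)
                    err(1, "malloc");
            if ((fd = open(argv[1], O_RDONLY)) < 0)
                    err(1, "%s", argv[1]);
            if (ioctl(fd, DIOCGMEDIASIZE, &media) < 0)
                    err(1, "DIOCGMEDIASIZE");
            off[0] = 0;                                     /* L0: front */
            off[1] = LABEL_SIZE;                            /* L1 */
            off[2] = media - 2 * (off_t)LABEL_SIZE;         /* L2: tail */
            off[3] = media - (off_t)LABEL_SIZE;             /* L3 */
            for (i = 0; i < 4; i++) {
                    if (pread(fd, buf, LABEL_SIZE, off[i]) != LABEL_SIZE)
                            err(1, "pread label %d", i);
                    for (j = 0; j < LABEL_SIZE && buf[j] == 0; j++)
                            ;
                    printf("L%d at %14jd: %s\n", i, (intmax_t)off[i],
                        j == LABEL_SIZE ? "all zeroes" : "has data");
            }
            close(fd);
            return (0);
    }

Compile with cc -o labelcheck labelcheck.c and point it at the affected device; if L2/L3 report data while L0/L1 read as zeroes, the damage really is confined to the front of the disk.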
On Thu, Apr 24, 2014 at 4:40 AM, Ronald Klop wrote: > On Tue, 22 Apr 2014 18:36:48 +0200, Gena Guchin > wrote: > > Ronald, >> >> system does see the disk, ada7, in this case. Nothing has been >> disconnected from the system. >> >> what steps do you suggest I take with GEOM? >> > > As I read the rest of the discussion I don't think this will help you. > People with more knowledge about ZFS have already explained a lot about your > problem. > > Ronald. > > > > >> >> thanks! >> >> >> On Apr 22, 2014, at 2:49 AM, Ronald Klop wrote: >> >> On Mon, 21 Apr 2014 21:29:27 +0200, Gena Guchin >>> wrote: >>> >>> Hello FreeBSD users, >>>> >>>> my apologies for reposting, but I really need your help! >>>> >>>> >>>> I have this huge problem with my ZFS server. I have accidentally >>>> formatted one of the drives in an exported ZFS pool, and now I can't import >>>> the pool back. This is an extremely important pool for me. The device that >>>> is missing is still attached to the system. Any help would be greatly >>>> appreciated. >>>> >>>> >>>> >>>> >>>> #uname -a >>>> FreeBSD XXX 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 >>>> 22:34:59 UTC 2014 root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC >>>> amd64 >>>> >>>> #zpool import >>>> pool: storage >>>> id: 11699153865862401654 >>>> state: UNAVAIL >>>> status: One or more devices are missing from the system. >>>> action: The pool cannot be imported. Attach the missing >>>> devices and try again. >>>> see: http://illumos.org/msg/ZFS-8000-6X >>>> config: >>>> >>>> storage UNAVAIL missing device >>>> raidz1-0 DEGRADED >>>> ada3 ONLINE >>>> ada4 ONLINE >>>> ada5 ONLINE >>>> ada6 ONLINE >>>> 248348789931078390 UNAVAIL cannot open >>>> cache >>>> ada1s2 >>>> logs >>>> ada1s1 ONLINE >>>> >>>> Additional devices are known to be part of this pool, though their >>>> exact configuration cannot be determined. >>>> >>>> >>>> # zpool list >>>> NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT >>>> zroot 920G 17.9G 902G 1% 1.00x ONLINE - >>>> >>>> # zpool upgrade >>>> This system supports ZFS pool feature flags. >>>> >>>> All pools are formatted using feature flags. >>>> >>>> Every feature flags pool has all supported features enabled. >>>> >>>> # zfs upgrade >>>> This system is currently running ZFS filesystem version 5. >>>> >>>> All filesystems are formatted with the current version. >>>> >>>> >>>> Thanks a lot! >>>> >>> >>> Does FreeBSD see the disk? Is it in /dev/ada2 (or another number)? >>> If FreeBSD does not know anything about the disk, ZFS can't either. A >>> reboot or some fiddling (partitioning?) with GEOM might make the disk >>> reappear. >>> >>> Ronald.
>>> _______________________________________________ >>> freebsd-fs@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> >> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 06:20:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8722587C for ; Mon, 28 Apr 2014 06:20:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 684631F80 for ; Mon, 28 Apr 2014 06:20:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3S6K2EB055718 for ; Mon, 28 Apr 2014 06:20:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3S6K2qC055717; Mon, 28 Apr 2014 06:20:02 GMT (envelope-from gnats) Date: Mon, 28 Apr 2014 06:20:02 GMT Message-Id: <201404280620.s3S6K2qC055717@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/186574: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 06:20:02 -0000 The following reply was made to PR kern/186574; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/186574: commit references a PR Date: Mon, 28 Apr 2014 06:11:07 +0000 (UTC) Author: delphij Date: Mon Apr 28 06:11:03 2014 New Revision: 265039 URL: http://svnweb.freebsd.org/changeset/base/265039 Log: MFC r264467: Take into account when zpool history block grows exceeding 128KB in zpool(8) and zdb(8) by growing the buffer on demand with a cap of 1GB (specified in spa_history_create_obj()). 
PR: bin/186574 Submitted by: Andrew Childs (with changes) Modified: stable/10/cddl/contrib/opensolaris/cmd/zdb/zdb.c stable/10/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Directory Properties: stable/10/ (props changed) Modified: stable/10/cddl/contrib/opensolaris/cmd/zdb/zdb.c ============================================================================== --- stable/10/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 28 05:39:20 2014 (r265038) +++ stable/10/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 28 06:11:03 2014 (r265039) @@ -929,11 +929,16 @@ dump_dtl(vdev_t *vd, int indent) dump_dtl(vd->vdev_child[c], indent + 4); } +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) + static void dump_history(spa_t *spa) { nvlist_t **events = NULL; - char buf[SPA_MAXBLOCKSIZE]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t resid, len, off = 0; uint_t num = 0; int error; @@ -942,8 +947,11 @@ dump_history(spa_t *spa) char tbuf[30]; char internalstr[MAXPATHLEN]; + if ((buf = malloc(bufsize)) == NULL) + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); do { - len = sizeof (buf); + len = bufsize; if ((error = spa_history_get(spa, &off, &len, buf)) != 0) { (void) fprintf(stderr, "Unable to read history: " @@ -953,9 +961,26 @@ dump_history(spa_t *spa) if (zpool_history_unpack(buf, len, &resid, &events, &num) != 0) break; - off -= resid; + + /* + * If the history block is too big, double the buffer + * size and try again. + */ + if (resid == len) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); + return; + } + } } while (len != 0); + free(buf); (void) printf("\nHistory:\n"); for (int i = 0; i < num; i++) { Modified: stable/10/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c ============================================================================== --- stable/10/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 28 05:39:20 2014 (r265038) +++ stable/10/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 28 06:11:03 2014 (r265039) @@ -3744,7 +3744,9 @@ zpool_history_unpack(char *buf, uint64_t return (0); } -#define HIS_BUF_LEN (128*1024) +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) /* * Retrieve the command history of a pool. @@ -3752,21 +3754,24 @@ zpool_history_unpack(char *buf, uint64_t int zpool_get_history(zpool_handle_t *zhp, nvlist_t **nvhisp) { - char buf[HIS_BUF_LEN]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t off = 0; nvlist_t **records = NULL; uint_t numrecords = 0; int err, i; + if ((buf = malloc(bufsize)) == NULL) + return (ENOMEM); do { - uint64_t bytes_read = sizeof (buf); + uint64_t bytes_read = bufsize; uint64_t leftover; if ((err = get_history(zhp, buf, &off, &bytes_read)) != 0) break; /* if nothing else was read in, we're at EOF, just return */ - if (!bytes_read) + if (bytes_read == 0) break; if ((err = zpool_history_unpack(buf, bytes_read, @@ -3774,8 +3779,25 @@ zpool_get_history(zpool_handle_t *zhp, n break; off -= leftover; + /* + * If the history block is too big, double the buffer + * size and try again. 
+ */ + if (leftover == bytes_read) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + err = ENOMEM; + break; + } + } + /* CONSTCOND */ } while (1); + free(buf); if (!err) { verify(nvlist_alloc(nvhisp, NV_UNIQUE_NAME, 0) == 0); _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 06:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 89E7D879 for ; Mon, 28 Apr 2014 06:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 689331DEB for ; Mon, 28 Apr 2014 06:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3S6K1aw055712 for ; Mon, 28 Apr 2014 06:20:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3S6K1nQ055711; Mon, 28 Apr 2014 06:20:01 GMT (envelope-from gnats) Date: Mon, 28 Apr 2014 06:20:01 GMT Message-Id: <201404280620.s3S6K1nQ055711@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/186574: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 06:20:01 -0000 The following reply was made to PR kern/186574; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/186574: commit references a PR Date: Mon, 28 Apr 2014 06:12:18 +0000 (UTC) Author: delphij Date: Mon Apr 28 06:12:15 2014 New Revision: 265041 URL: http://svnweb.freebsd.org/changeset/base/265041 Log: MFC r264467: Take into account when zpool history block grows exceeding 128KB in zpool(8) and zdb(8) by growing the buffer on demand with a cap of 1GB (specified in spa_history_create_obj()). 
PR: bin/186574 Submitted by: Andrew Childs (with changes) Modified: stable/8/cddl/contrib/opensolaris/cmd/zdb/zdb.c stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Directory Properties: stable/8/cddl/contrib/opensolaris/ (props changed) stable/8/cddl/contrib/opensolaris/lib/libzfs/ (props changed) Modified: stable/8/cddl/contrib/opensolaris/cmd/zdb/zdb.c ============================================================================== --- stable/8/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 28 06:11:44 2014 (r265040) +++ stable/8/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 28 06:12:15 2014 (r265041) @@ -929,11 +929,16 @@ dump_dtl(vdev_t *vd, int indent) dump_dtl(vd->vdev_child[c], indent + 4); } +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) + static void dump_history(spa_t *spa) { nvlist_t **events = NULL; - char buf[SPA_MAXBLOCKSIZE]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t resid, len, off = 0; uint_t num = 0; int error; @@ -942,8 +947,11 @@ dump_history(spa_t *spa) char tbuf[30]; char internalstr[MAXPATHLEN]; + if ((buf = malloc(bufsize)) == NULL) + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); do { - len = sizeof (buf); + len = bufsize; if ((error = spa_history_get(spa, &off, &len, buf)) != 0) { (void) fprintf(stderr, "Unable to read history: " @@ -953,9 +961,26 @@ dump_history(spa_t *spa) if (zpool_history_unpack(buf, len, &resid, &events, &num) != 0) break; - off -= resid; + + /* + * If the history block is too big, double the buffer + * size and try again. + */ + if (resid == len) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); + return; + } + } } while (len != 0); + free(buf); (void) printf("\nHistory:\n"); for (int i = 0; i < num; i++) { Modified: stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c ============================================================================== --- stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 28 06:11:44 2014 (r265040) +++ stable/8/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 28 06:12:15 2014 (r265041) @@ -3736,7 +3736,9 @@ zpool_history_unpack(char *buf, uint64_t return (0); } -#define HIS_BUF_LEN (128*1024) +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) /* * Retrieve the command history of a pool. @@ -3744,21 +3746,24 @@ zpool_history_unpack(char *buf, uint64_t int zpool_get_history(zpool_handle_t *zhp, nvlist_t **nvhisp) { - char buf[HIS_BUF_LEN]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t off = 0; nvlist_t **records = NULL; uint_t numrecords = 0; int err, i; + if ((buf = malloc(bufsize)) == NULL) + return (ENOMEM); do { - uint64_t bytes_read = sizeof (buf); + uint64_t bytes_read = bufsize; uint64_t leftover; if ((err = get_history(zhp, buf, &off, &bytes_read)) != 0) break; /* if nothing else was read in, we're at EOF, just return */ - if (!bytes_read) + if (bytes_read == 0) break; if ((err = zpool_history_unpack(buf, bytes_read, @@ -3766,8 +3771,25 @@ zpool_get_history(zpool_handle_t *zhp, n break; off -= leftover; + /* + * If the history block is too big, double the buffer + * size and try again. 
+ */ + if (leftover == bytes_read) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + err = ENOMEM; + break; + } + } + /* CONSTCOND */ } while (1); + free(buf); if (!err) { verify(nvlist_alloc(nvhisp, NV_UNIQUE_NAME, 0) == 0); _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 06:20:03 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 94F3F87D for ; Mon, 28 Apr 2014 06:20:03 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 76EE01F9F for ; Mon, 28 Apr 2014 06:20:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3S6K3pj055725 for ; Mon, 28 Apr 2014 06:20:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3S6K3QC055724; Mon, 28 Apr 2014 06:20:03 GMT (envelope-from gnats) Date: Mon, 28 Apr 2014 06:20:03 GMT Message-Id: <201404280620.s3S6K3QC055724@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: dfilter@FreeBSD.ORG (dfilter service) Subject: Re: kern/186574: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 06:20:03 -0000 The following reply was made to PR kern/186574; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/186574: commit references a PR Date: Mon, 28 Apr 2014 06:11:47 +0000 (UTC) Author: delphij Date: Mon Apr 28 06:11:44 2014 New Revision: 265040 URL: http://svnweb.freebsd.org/changeset/base/265040 Log: MFC r264467: Take into account when zpool history block grows exceeding 128KB in zpool(8) and zdb(8) by growing the buffer on demand with a cap of 1GB (specified in spa_history_create_obj()). 
PR: bin/186574 Submitted by: Andrew Childs (with changes) Modified: stable/9/cddl/contrib/opensolaris/cmd/zdb/zdb.c stable/9/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Directory Properties: stable/9/cddl/contrib/opensolaris/ (props changed) stable/9/cddl/contrib/opensolaris/lib/libzfs/ (props changed) Modified: stable/9/cddl/contrib/opensolaris/cmd/zdb/zdb.c ============================================================================== --- stable/9/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 28 06:11:03 2014 (r265039) +++ stable/9/cddl/contrib/opensolaris/cmd/zdb/zdb.c Mon Apr 28 06:11:44 2014 (r265040) @@ -929,11 +929,16 @@ dump_dtl(vdev_t *vd, int indent) dump_dtl(vd->vdev_child[c], indent + 4); } +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) + static void dump_history(spa_t *spa) { nvlist_t **events = NULL; - char buf[SPA_MAXBLOCKSIZE]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t resid, len, off = 0; uint_t num = 0; int error; @@ -942,8 +947,11 @@ dump_history(spa_t *spa) char tbuf[30]; char internalstr[MAXPATHLEN]; + if ((buf = malloc(bufsize)) == NULL) + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); do { - len = sizeof (buf); + len = bufsize; if ((error = spa_history_get(spa, &off, &len, buf)) != 0) { (void) fprintf(stderr, "Unable to read history: " @@ -953,9 +961,26 @@ dump_history(spa_t *spa) if (zpool_history_unpack(buf, len, &resid, &events, &num) != 0) break; - off -= resid; + + /* + * If the history block is too big, double the buffer + * size and try again. + */ + if (resid == len) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + (void) fprintf(stderr, "Unable to read history: " + "out of memory\n"); + return; + } + } } while (len != 0); + free(buf); (void) printf("\nHistory:\n"); for (int i = 0; i < num; i++) { Modified: stable/9/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c ============================================================================== --- stable/9/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 28 06:11:03 2014 (r265039) +++ stable/9/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c Mon Apr 28 06:11:44 2014 (r265040) @@ -3736,7 +3736,9 @@ zpool_history_unpack(char *buf, uint64_t return (0); } -#define HIS_BUF_LEN (128*1024) +/* from spa_history.c: spa_history_create_obj() */ +#define HIS_BUF_LEN_DEF (128 << 10) +#define HIS_BUF_LEN_MAX (1 << 30) /* * Retrieve the command history of a pool. @@ -3744,21 +3746,24 @@ zpool_history_unpack(char *buf, uint64_t int zpool_get_history(zpool_handle_t *zhp, nvlist_t **nvhisp) { - char buf[HIS_BUF_LEN]; + char *buf = NULL; + uint64_t bufsize = HIS_BUF_LEN_DEF; uint64_t off = 0; nvlist_t **records = NULL; uint_t numrecords = 0; int err, i; + if ((buf = malloc(bufsize)) == NULL) + return (ENOMEM); do { - uint64_t bytes_read = sizeof (buf); + uint64_t bytes_read = bufsize; uint64_t leftover; if ((err = get_history(zhp, buf, &off, &bytes_read)) != 0) break; /* if nothing else was read in, we're at EOF, just return */ - if (!bytes_read) + if (bytes_read == 0) break; if ((err = zpool_history_unpack(buf, bytes_read, @@ -3766,8 +3771,25 @@ zpool_get_history(zpool_handle_t *zhp, n break; off -= leftover; + /* + * If the history block is too big, double the buffer + * size and try again. 
+ */ + if (leftover == bytes_read) { + free(buf); + buf = NULL; + + bufsize <<= 1; + if ((bufsize >= HIS_BUF_LEN_MAX) || + ((buf = malloc(bufsize)) == NULL)) { + err = ENOMEM; + break; + } + } + /* CONSTCOND */ } while (1); + free(buf); if (!err) { verify(nvlist_alloc(nvhisp, NV_UNIQUE_NAME, 0) == 0); _______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 06:31:59 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 49BDDBA6; Mon, 28 Apr 2014 06:31:59 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1F3F710CD; Mon, 28 Apr 2014 06:31:59 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3S6VwKt061577; Mon, 28 Apr 2014 06:31:58 GMT (envelope-from delphij@freefall.freebsd.org) Received: (from delphij@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3S6VwnE061573; Mon, 28 Apr 2014 06:31:58 GMT (envelope-from delphij) Date: Mon, 28 Apr 2014 06:31:58 GMT Message-Id: <201404280631.s3S6VwnE061573@freefall.freebsd.org> To: lorne@cons.org.nz, delphij@FreeBSD.org, freebsd-fs@FreeBSD.org, delphij@FreeBSD.org From: delphij@FreeBSD.org Subject: Re: kern/186574: [zfs] zpool history hangs (infinite loop) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 06:31:59 -0000 Synopsis: [zfs] zpool history hangs (infinite loop) State-Changed-From-To: open->closed State-Changed-By: delphij State-Changed-When: Mon Apr 28 06:31:27 UTC 2014 State-Changed-Why: A slightly changed version of patch committed and merged. Responsible-Changed-From-To: freebsd-fs->delphij Responsible-Changed-By: delphij Responsible-Changed-When: Mon Apr 28 06:31:27 UTC 2014 Responsible-Changed-Why: Take. Thanks for your submission! 
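Boiled down, the fix merged above (r265039 through r265041) replaces a fixed 128KB stack buffer with a heap buffer that doubles whenever a read comes back completely full, since a full buffer may mean the last history record was truncated; with the old fixed buffer, a record larger than 128KB could never make progress, which is presumably the infinite loop in the PR. Below is a small self-contained userland sketch of that grow-on-demand loop, not the committed code itself: read_chunk() is just a stand-in for spa_history_get()/get_history(), and unlike the zdb hunk above (whose first malloc failure prints a warning but still falls into the loop), the sketch bails out as soon as an allocation fails.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_DEF (128 << 10)     /* initial size, matching HIS_BUF_LEN_DEF */
    #define BUF_MAX (1 << 30)       /* cap, matching HIS_BUF_LEN_MAX */

    /* Stand-in for spa_history_get(): a 300KB pretend history object. */
    static char history[300 << 10];

    static int
    read_chunk(char *buf, uint64_t *len, uint64_t *off)
    {
            uint64_t left = sizeof(history) - *off;

            if (*len > left)
                    *len = left;
            memcpy(buf, history + *off, *len);
            *off += *len;
            return (0);
    }

    int
    main(void)
    {
            uint64_t bufsize = BUF_DEF, len, off = 0;
            char *buf;

            if ((buf = malloc(bufsize)) == NULL)
                    return (ENOMEM);
            for (;;) {
                    len = bufsize;
                    if (read_chunk(buf, &len, &off) != 0 || len == 0)
                            break;          /* error or EOF */
                    /* ... records in buf[0..len) would be unpacked here ... */
                    if (len == bufsize) {
                            /*
                             * Buffer came back full, so the last record may
                             * be truncated: rewind, double the buffer, retry.
                             */
                            off -= len;
                            free(buf);
                            bufsize <<= 1;
                            printf("retrying with a %juKB buffer\n",
                                (uintmax_t)(bufsize >> 10));
                            if (bufsize >= BUF_MAX ||
                                (buf = malloc(bufsize)) == NULL)
                                    return (ENOMEM);
                    }
            }
            free(buf);
            return (0);
    }

The rewind (off -= len) mirrors the committed code's off -= resid / off -= leftover: after growing, the re-read starts where the truncated read began.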
http://www.freebsd.org/cgi/query-pr.cgi?pr=186574 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 11:06:46 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8CF9E40A for ; Mon, 28 Apr 2014 11:06:46 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 780EC1A9F for ; Mon, 28 Apr 2014 11:06:46 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s3SB6krr086108 for ; Mon, 28 Apr 2014 11:06:46 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s3SB6jl6086105 for freebsd-fs@FreeBSD.org; Mon, 28 Apr 2014 11:06:45 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 28 Apr 2014 11:06:45 GMT Message-Id: <201404281106.s3SB6jl6086105@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 11:06:46 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/188443 fs [smbfs] Segfault with tail(1) when mmap(2) called o kern/188328 fs [zfs] UPDATING should provide caveats for running `zpo o kern/188187 fs [zfs] [panic] 10-stable: Kernel panic on zpool import: o kern/187905 fs [zpool] Confusion zpool with a block size in HDD - blo o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix s kern/187414 fs [zfs] ZFS Write Deadlock on 8.4 o kern/187261 fs [fusefs] FUSE kernel panic when using socket / bind o kern/186942 fs [zfs] [panic] Fatal trap 12 (seems zfs related) o kern/186720 fs [xfs] is xfs now unsupported in the kernel? 
o kern/186652 fs [smbfs] [panic] crash during umount -a -t smbfs o kern/186645 fs [fusefs] Crash after unmounting wdfs o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over o kern/186112 fs [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479 o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool o kern/185734 fs [zfs] [panic] panic on stable/10 when writing to ZFS d o kern/185374 fs [msdosfs] [panic] Unmounting msdos filesystem in a bad o kern/184677 fs [zfs] [panic] ZFS snapshot umount kernel panic o kern/184478 fs [smbfs] mount_smbfs cannot read/write files o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] [panic] Kernel panic in ZFS I/O: solaris assert: o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181791 fs [zfs] ZFS ARC Deadlock o kern/181565 fs [swap] Problem with vnode-backed swap space. o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. 
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [softupdates] [panic] softdep_deallocate_dependencies: o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173235 fs [smbfs] [panic] Have received two crashes within 1 day o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172630 fs [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus f kern/172197 fs [zfs] Userquota (as well as groupquota) does not work o kern/172092 fs [zfs] [panic] zfs import panics kernel o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170523 fs [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNT o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167362 fs [fusefs] Reproduceble 
Page Fault when running rsync ov o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs [ufs] [patch] Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162195 fs [softupdates] [panic] panic with soft updates journali o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o 
kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] [panic] zfs/dbuf.c: panic: solaris assert: arc_b o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] [panic] panic during ls(1) zfs snapshot director o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' 
zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [softupdates] [panic] Kernel panic in softdep_disk_wri o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] [panic] zpool scrub causes panic if geli vdevs d o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 359 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 15:53:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 00266483 for ; Mon, 28 Apr 2014 15:53:49 +0000 (UTC) Received: from plane.gmane.org (plane.gmane.org [80.91.229.3]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AB00D1DFE for ; Mon, 28 Apr 2014 15:53:49 +0000 (UTC) Received: from list by plane.gmane.org with local (Exim 4.69) (envelope-from ) id 1WensJ-0005io-1H for freebsd-fs@freebsd.org; Mon, 28 Apr 2014 17:53:47 +0200 Received: from lara.cc.fer.hr ([161.53.72.113]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Mon, 28 Apr 2014 17:53:47 +0200 Received: from ivoras by lara.cc.fer.hr with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Mon, 28 Apr 2014 17:53:47 +0200 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Ivan Voras Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server Date: Mon, 28 Apr 2014 17:53:34 +0200 Lines: 39 Message-ID: References: <507714298.1684844.1398541651089.JavaMail.root@uoguelph.ca> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="DVwAwwHxqsaBmFtmn9rVuvwt2h83rFfxn" X-Complaints-To: usenet@ger.gmane.org X-Gmane-NNTP-Posting-Host: lara.cc.fer.hr User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.3.0 In-Reply-To: <507714298.1684844.1398541651089.JavaMail.root@uoguelph.ca> X-Enigmail-Version: 1.6 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 15:53:50 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --DVwAwwHxqsaBmFtmn9rVuvwt2h83rFfxn Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable On 26/04/2014 21:47, Rick Macklem wrote: > Any other comments w.r.t. this would be appreciated, including > generic stuff like "we couldn't care less about pNFS" or technical > details/opinions. 
> > Thanks in advance for any feedback, rick > ps: I'm nowhere near committing to do this at this point and > I do realize that even completing the ceph port to FreeBSD > might be beyond my limited resources. What functionality from ceph would pNFS really need? Would pNFS need to be implemented with a single back-end storage like ceph or could it be modular? (I don't have much experience here but it looks like HDFS is becoming popular for some big-data applications). From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 16:09:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1C15DB85 for ; Mon, 28 Apr 2014 16:09:03 +0000 (UTC) Received: from cavuit01.kulnet.kuleuven.be (rhcavuit01.kulnet.kuleuven.be [IPv6:2a02:2c40:0:c0::25:129]) by mx1.freebsd.org (Postfix) with ESMTP id CD38610AF for ; Mon, 28 Apr 2014 16:09:02 +0000 (UTC) X-KULeuven-Envelope-From: bram.vandoren@ster.kuleuven.be X-Spam-Status: not spam, SpamAssassin (not cached, score=-48.726, required 5, autolearn=disabled, LOCAL_SMTPS -50.00, RDNS_NONE 1.27) X-KULeuven-Scanned: Found to be clean X-KULeuven-ID: 524731380D9.A1332 X-KULeuven-Information: Katholieke Universiteit Leuven Received: from icts-p-smtps-2.cc.kuleuven.be (icts-p-smtps-2e.kulnet.kuleuven.be [134.58.240.34]) by cavuit01.kulnet.kuleuven.be (Postfix) with ESMTP id 524731380D9; Mon, 28 Apr 2014 18:08:58 +0200 (CEST) Received: from miaplacidus.ster.kuleuven.be (unknown [10.33.178.95]) by icts-p-smtps-2.cc.kuleuven.be (Postfix) with ESMTP id 477BC20041; Mon, 28 Apr 2014 18:08:55 +0200 (CEST) Message-ID: <535E7D17.6000303@ster.kuleuven.be> Date: Mon, 28 Apr 2014 18:08:55 +0200 X-Kuleuven: This mail passed the K.U.Leuven mailcluster From: Bram Vandoren User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: Rick Macklem Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server References: <507714298.1684844.1398541651089.JavaMail.root@uoguelph.ca> In-Reply-To: <507714298.1684844.1398541651089.JavaMail.root@uoguelph.ca> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Apr 2014 16:09:03 -0000 Hi Rick, On 04/26/2014 09:47 PM, Rick Macklem wrote: > Any other comments w.r.t. this would be appreciated, including > generic stuff like "we couldn't care less about pNFS" or technical > details/opinions. I have some experience with Gluster (I suspect Ceph is similar).
I don't think it's very useful:

- These file systems are implemented in user space (except for the
  FUSE glue). Gluster includes a (user mode) NFS server itself so it
  can skip the kernel VFS layer and FUSE glue to provide NFS.
- Gluster has its own network protocol. You can use this protocol to
  mount the volume on the client instead of NFS.
- You can use a native API to access these filesystems instead of
  POSIX (a sketch of what that looks like follows below).

The amount of people waiting for yet another method to access their
cluster file system is probably limited.

Thanks for your work on the FreeBSD NFS server.

Cheers,
Bram.
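To make the "native API" point concrete, here is a minimal sketch
using Gluster's libgfapi. The volume name, server host, and port
below are placeholders, error handling is elided, and the glfs_*
calls are assumed to match 2014-era libgfapi rather than verified
against it, so treat this as a sketch, not a definitive client:

#include <glusterfs/api/glfs.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>

int
main(void)
{
	char buf[128];

	/* Connect to a volume by name -- no kernel mount, no FUSE. */
	glfs_t *fs = glfs_new("myvol");
	glfs_set_volfile_server(fs, "tcp", "server1", 24007);
	glfs_init(fs);		/* fetches the volfile and connects */

	/* Paths are relative to the volume root, not the host fs. */
	glfs_fd_t *fd = glfs_open(fs, "/some/file", O_RDONLY);
	ssize_t n = glfs_read(fd, buf, sizeof(buf), 0);
	if (n > 0)
		fwrite(buf, 1, (size_t)n, stdout);

	glfs_close(fd);
	glfs_fini(fs);
	return (0);
}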
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 16:12:21 2014
From: Matthew Seaman <matthew@freebsd.org>
To: freebsd-fs@freebsd.org
Date: Mon, 28 Apr 2014 17:12:02 +0100
Subject: File system deadlock -- postgresql related?
Message-ID: <535E7DD2.3050600@freebsd.org>

Hi,

We've a FreeBSD-9.2-RELEASE-p3 server running postgresql-9.3.4 which
is giving us some problems.  I know of the performance problems with
postgresql-9.3 referenced here:

http://postgresql.1045698.n5.nabble.com/Perfomance-degradation-9-3-vs-9-2-for-FreeBSD-td5800835.html

and also on this list.

However, we're seeing occasional system freezes for periods of maybe
5 -- 10 minutes, with many log messages like so appearing:

Apr 28 16:19:57 db-17a kernel: g_vfs_done():mfid1p1[WRITE(offset=1058001420288, length=32768)]error = 11
Apr 28 16:19:57 db-17a kernel: g_vfs_done():mfid1p1[WRITE(offset=1058001649664, length=32768)]error = 11
Apr 28 16:19:57 db-17a kernel: g_vfs_done():mfid1p1[WRITE(offset=1058002010112, length=32768)]error = 11

All I can google about that problem suggests the 'error 11' is a
deadlock that occurs primarily when using disk encryption or other
setups where disk IO would be more complex than normal.  But that is
certainly not the case here --- it's a UFS2 filesystem on a hardware
RAID10 using 15k RPM SAS drives.

Any clues?  Am I correct to conclude that the mmap related performance
problem is somehow related to the g_vfs_done() log messages?

Cheers,

Matthew
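One quick way to check what "error 11" means on this platform (a
sketch, not a diagnosis of the freeze itself): on FreeBSD, errno 11
is EDEADLK rather than EAGAIN as on Linux, which is consistent with
reading the g_vfs_done() lines above as deadlock reports:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/*
	 * On FreeBSD this should print "EDEADLK = 11: Resource
	 * deadlock avoided" (value and text assumed from FreeBSD's
	 * errno.h, not from the thread above).
	 */
	printf("EDEADLK = %d: %s\n", EDEADLK, strerror(EDEADLK));
	return (0);
}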
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 16:27:25 2014
From: Chris BeHanna <chris@behanna.org>
To: FreeBSD FS <freebsd-fs@freebsd.org>
Date: Mon, 28 Apr 2014 11:24:26 -0500
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
In-Reply-To: <535E7D17.6000303@ster.kuleuven.be>

On Apr 28, 2014, at 11:08 AM, Bram Vandoren wrote:

> The amount of people waiting for yet another method to access their
> cluster file system is probably limited.

	LANL, LLNL, Sandia, every major oil & gas exploration company,
large weather simulation centers, financial modelers, and let us not
forget movie and TV CGI studios.

	Yup, pretty limited.  :-)

	pNFS is pretty much *the* standard going forward for accessing
large object-based (OSD) storage volumes.  At least it was when I made
a living in that space.
	Regards,

	Chris BeHanna
	chris@behanna.org

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 17:05:11 2014
From: Edward Tomasz Napierała
To: Chris BeHanna
Cc: FreeBSD FS <freebsd-fs@freebsd.org>
Date: Mon, 28 Apr 2014 19:05:03 +0200
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
Message-Id: <33C0F291-0DA4-4658-B249-0B0E1F633832@FreeBSD.org>

Message written by Chris BeHanna on 28 Apr 2014, at 18:24:

> On Apr 28, 2014, at 11:08 AM, Bram Vandoren wrote:
>
>> The amount of people waiting for yet another method to access their
>> cluster file system is probably limited.
>
> 	LANL, LLNL, Sandia, every major oil & gas exploration company,
> large weather simulation centers, financial modelers, and let us not
> forget movie and TV CGI studios.
>
> 	Yup, pretty limited. :-)
>
> 	pNFS is pretty much *the* standard going forward for accessing
> large object-based (OSD) storage volumes.  At least it was when I
> made a living in that space.

Does that mean the OSD - as in, the SCSI standard - is finally
starting to gain some traction?
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 17:11:55 2014
From: Chris BeHanna <chris@behanna.org>
To: FreeBSD FS <freebsd-fs@freebsd.org>
Date: Mon, 28 Apr 2014 12:09:42 -0500
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
Message-Id: <749215C3-D515-4B38-94B3-97AF152EE5E8@behanna.org>
In-Reply-To: <33C0F291-0DA4-4658-B249-0B0E1F633832@FreeBSD.org>
On Apr 28, 2014, at 12:05 PM, Edward Tomasz Napierała wrote:

> Message written by Chris BeHanna on 28 Apr 2014, at 18:24:
>> On Apr 28, 2014, at 11:08 AM, Bram Vandoren wrote:
>>
>>> The amount of people waiting for yet another method to access their
>>> cluster file system is probably limited.
>>
>> 	LANL, LLNL, Sandia, every major oil & gas exploration company,
>> large weather simulation centers, financial modelers, and let us not
>> forget movie and TV CGI studios.
>>
>> 	Yup, pretty limited. :-)
>>
>> 	pNFS is pretty much *the* standard going forward for accessing
>> large object-based (OSD) storage volumes.  At least it was when I
>> made a living in that space.
>
> Does that mean the OSD - as in, the SCSI standard - is finally
> starting to gain some traction?

	There have been OSD-based commercial file systems in production
for more than ten years now, in all of the fields I mentioned above.

	How closely this tracks with the standard is something I would
have to leave to someone else to address.

-- 
Chris BeHanna
chris@behanna.org

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 28 17:40:05 2014
From: Sean Bruno <sbruno@freebsd.org>
To: FreeBSD FS <freebsd-fs@freebsd.org>
Date: Mon, 28 Apr 2014 10:40:03 -0700
Subject: sys/boot/zfs out of bounds warning
Message-ID: <1398706803.1089.3.camel@powernoodle.corp.yahoo.com>

looking at sys/boot things this morning, noted that there's a pretty
obvious warning about an out of bounds access.  I suspect that I don't
see a layer of abstraction here that makes this ok, but I'm not sure.

It seems like if this wasn't valid memory, things would go pretty
badly in zfs_lookup().

Is there a better way to do this memcpy() such that it doesn't trip a
warning here?

-----------------
In file included from /home/sbruno/bsd/head/sys/boot/zfs/zfs.c:49:
/home/sbruno/bsd/head/sys/boot/zfs/zfsimpl.c:2080:19: warning: array
    index 264 is past the end of the array (which contains 192
    elements) [-Warray-bounds]
        memcpy(path, &dn.dn_bonus[sizeof(znode_phys_t)],
                     ^            ~~~~~~~~~~~~~~~~~~~~
/home/sbruno/bsd/head/sys/boot/zfs/../../cddl/boot/zfs/zfsimpl.h:790:2:
    note: array 'dn_bonus' declared here
        uint8_t dn_bonus[DN_MAX_BONUSLEN - sizeof (blkptr_t)];
------
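For what it's worth, one possible shape of an answer (a sketch under
an assumption, not a reviewed fix): if the symlink target really is
stored in the bonus area immediately after the znode_phys_t, which is
what the existing copy implies, then deriving the source pointer from
the start of the dnode performs the same copy without indexing past
dn_bonus's declared bound, which is what -Warray-bounds keys on.
Here dn, path, and the types are the ones from zfsimpl.c/zfsimpl.h
above; "len" is a hypothetical stand-in for whatever length the real
code passes:

#include <stddef.h>	/* offsetof */
#include <stdint.h>
#include <string.h>	/* memcpy */

	/*
	 * Same bytes as &dn.dn_bonus[sizeof(znode_phys_t)], but the
	 * address is computed relative to the whole dnode_phys_t, so
	 * the compiler no longer sees an out-of-bounds index into
	 * dn_bonus.  Only valid if the on-disk dnode really carries
	 * the symlink target there ("len" is hypothetical).
	 */
	const uint8_t *src = (const uint8_t *)&dn +
	    offsetof(dnode_phys_t, dn_bonus) + sizeof(znode_phys_t);
	memcpy(path, src, len);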
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 29 00:38:24 2014
From: Rick Macklem
To: Ivan Voras
Cc: freebsd-fs@freebsd.org
Date: Mon, 28 Apr 2014 20:37:19 -0400 (EDT)
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
Message-ID: <1459248112.3139531.1398731839613.JavaMail.root@uoguelph.ca>

Ivan Voras wrote:
> On 26/04/2014 21:47, Rick Macklem wrote:
>
>> Any other comments w.r.t. this would be appreciated, including
>> generic stuff like "we couldn't care less about pNFS" or technical
>> details/opinions.
>>
>> Thanks in advance for any feedback, rick
>> ps: I'm nowhere near committing to do this at this point and
>> I do realize that even completing the ceph port to FreeBSD
>> might be beyond my limited resources.
>
> What functionality from ceph would pNFS really need? Would pNFS need
> to be implemented with a single back-end storage like ceph or could
> it be modular?
> (I don't have much experience here but it looks like HDFS is
> becoming popular for some big-data applications).

Well, I doubt I can answer this, but here is a simple summary of what
a pNFS server does:
- The NFSv4.1/pNFS server (sometimes called a metadata server or MDS)
  handles all the normal NFS stuff including read/writes of the
  files.  However, it can also hand out layouts, which tell the
  client where to read/write the file on another data server (DS).
  (A rough sketch of what such a layout carries appears after this
  message.)
- There are RFCs that describe 3 ways the client can read/write data
  on a DS.
  1 - File Layout, where the client uses a subset of NFSv4.1
      (read/write + enough others to use them).
  2 - Block/volume, where the client uses iSCSI to read/write blocks
      for the file's data.
  3 - Object, where the object storage commands are used over iSCSI.

I think you can see that any of these require a lot of work to be
done "behind the curtains" so that the MDS server can know where the
file's data lives (and it can be striped across multiple DSs, etc).

To implement this "from the ground up" is way beyond my limited
time/resources (and expertise).  I hope that I can find an open
source cluster file system that handles most of the "behind the
curtains" stuff so that all the NFSv4.1 server needs to do is "ask
the cluster file system where the file/object's data lives" and
generate a layout from that.  (I'm basically looking for a path of
least work. ;-)

Exactly what is needed from the cluster fs isn't obvious to me at
this time (and depends on layout type) but here are some thoughts:
- where the file's data lives and the info needed for the layout so
  the client can read and write the file's data at the DS.
- when the file's data location changes, so it can recall the stale
  layout.
- allowing the file to grow without the MDS having to do anything,
  when the client writes to the DS (the MDS needs to have a way to
  find out the current size of the file).
- allow the DSs to be built easily, using FreeBSD and the cluster
  file system tools (ideally using underlying FreeBSD file systems
  like ZFS to avoid "yet another" file system).

There are probably a lot more of these.  My hunch is that doing this
for even one cluster file system will be at/beyond my time/resource
limits.  I also suspect these cluster file systems are different
enough that each would be a lot of effort, even ignoring the fact
that none of them are ported to FreeBSD.  I'd also like to avoid
porting a file system into FreeBSD.  What I do like about ceph (and
gluster is similar, I think?) is that they are layered on top of a
regular file system, so they can use ZFS for the actual storage
handling.

rick
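For readers who have not seen one, here is a rough C rendering of
what a file layout (the first flavour above) hands to the client.
The field names are paraphrased from memory of RFC 5661's XDR rather
than copied from it, so check the spec before relying on them; this
is a sketch of the idea, not the wire format:

#include <stdint.h>

#define NFS4_DEVICEID4_SIZE	16	/* per RFC 5661 */

/*
 * Sketch of an NFSv4.1 file layout: enough for the client to find
 * the data servers and stripe the file's data across them.  The
 * real thing is XDR-encoded on the wire.
 */
struct file_layout_sketch {
	/* Names a DS address list, fetched via GETDEVICEINFO. */
	uint8_t		deviceid[NFS4_DEVICEID4_SIZE];
	/* Bytes written to one DS before moving to the next. */
	uint32_t	stripe_unit;
	uint32_t	first_stripe_index;
	/* File offset where the striping pattern starts. */
	uint64_t	pattern_offset;
	/* One filehandle per DS holding a piece of the file;
	 * followed on the wire by that many variable-length
	 * NFS filehandles. */
	uint32_t	fh_count;
};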
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 29 01:08:15 2014
From: Rick Macklem
To: Ivan Voras
Cc: freebsd-fs@freebsd.org
Date: Mon, 28 Apr 2014 21:08:13 -0400 (EDT)
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
Message-ID: <969953602.3152044.1398733693818.JavaMail.root@uoguelph.ca>

Ivan Voras wrote:
> On 26/04/2014 21:47, Rick Macklem wrote:
>
>> Any other comments w.r.t. this would be appreciated, including
>> generic stuff like "we couldn't care less about pNFS" or technical
>> details/opinions.
>>
>> Thanks in advance for any feedback, rick
>> ps: I'm nowhere near committing to do this at this point and
>> I do realize that even completing the ceph port to FreeBSD
>> might be beyond my limited resources.
>
> What functionality from ceph would pNFS really need? Would pNFS need
> to be implemented with a single back-end storage like ceph or could
> it be modular? (I don't have much experience here but it looks like
> HDFS is becoming popular for some big-data applications).

From a quick glance, HDFS (Hadoop) uses a weak consistency model and
is designed for write once, read many files.  (It does not appear to
be POSIX compliant.)

Swift sounds somewhat similar, in that it has an "eventually
consistent" rule.

I'll take a closer look at it, but at first glance it doesn't sound
like it would be appropriate as a back end to a pNFS server.  (pNFS
clients expect to see what they get from NFS, except they can do the
reads/writes other ways.)
Thanks everyone for the comments so far, rick

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 29 04:29:38 2014
From: Christoph Hellwig
To: Rick Macklem
Cc: freebsd-fs@freebsd.org, Ivan Voras
Date: Mon, 28 Apr 2014 21:29:37 -0700
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
Message-ID: <20140429042937.GA19366@infradead.org>
In-Reply-To: <1459248112.3139531.1398731839613.JavaMail.root@uoguelph.ca>

On Mon, Apr 28, 2014 at 08:37:19PM -0400, Rick Macklem wrote:
> 2 - Block/volume, where the client uses iSCSI to read/write blocks
>     for the file's data.

There is nothing iSCSI specific in the block layout spec, even if
that seems to be the reference implementation.  Any block device that
allows multiple initiators will do.
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 29 05:25:50 2014
From: Brent Welch
To: Christoph Hellwig, Rick Macklem
Cc: freebsd-fs@freebsd.org, Ivan Voras
Date: Tue, 29 Apr 2014 05:25:49 +0000
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server

Rick - you can reach out to the folks at Panasas, which is a FreeBSD
shop with its own proprietary cluster file system that supports pNFS
(via an NFS-Ganesha layer) and its own pNFS precursor.  That probably
doesn't meet your needs exactly, but technically it is a dead-on
match for what you are thinking about.  Try Celeste Baranski,
cbaranski@panasas.com

On Mon Apr 28 2014 at 9:29:45 PM, Christoph Hellwig wrote:

> On Mon, Apr 28, 2014 at 08:37:19PM -0400, Rick Macklem wrote:
>> 2 - Block/volume, where the client uses iSCSI to read/write blocks
>>     for the file's data.
>
> There is nothing iSCSI specific in the block layout spec, even if
> that seems to be the reference implementation.  Any block device
> that allows multiple initiators will do.
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 30 10:20:01 2014
From: arrowdodger <6yearold@gmail.com>
To: freebsd-fs@FreeBSD.org
Date: Wed, 30 Apr 2014 10:20:01 GMT
Subject: Re: kern/188187: [zfs] [panic] 10-stable: Kernel panic on zpool import: integer divide fault
Message-Id: <201404301020.s3UAK0Nx038308@freefall.freebsd.org>

The following reply was made to PR kern/188187; it has been noted by
GNATS.

From: arrowdodger <6yearold@gmail.com>
To: bug-followup@freebsd.org, 6yearold@gmail.com
Subject: Re: kern/188187: [zfs] [panic] 10-stable: Kernel panic on zpool import: integer divide fault
Date: Wed, 30 Apr 2014 14:17:31 +0400

Recent 11-CURRENT snapshot also panics with

solaris assertion: zio->io_error != 0 failed
at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c,
line 576
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 30 22:25:04 2014
From: Rainer Duffner <rainer@ultra-secure.de>
To: Rick Macklem
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>
Date: Thu, 1 May 2014 00:20:49 +0200
Subject: Re: RFC: using ceph as a backend for an NFSv4.1 pNFS server
In-Reply-To: <507714298.1684844.1398541651089.JavaMail.root@uoguelph.ca>

On 26.04.2014 at 21:47, Rick Macklem wrote:

> Hi,
>
> The non-pNFS v4.1 server in the projects area is just about ready
> for head, I think.  However, without pNFS, NFSv4.1 isn't all that
> interesting.  The problem is that doing a pNFS server is a
> non-trivial exercise.  I am now somewhat familiar with pNFS (from
> doing the client side), but have no expertise w.r.t. cluster file
> systems, etc.
>
> For those not familiar with pNFS, the basic idea is that the NFSv4.1
> server becomes a metadata server (MDS) and hands out what are called
> layouts and devinfo, so that the client can access data server(s)
> (DS) to read/write the file.  There are RFCs that define both
> block/volume (using iSCSI or similar) and object (using something
> called OSD2).
>
> Although I suspect there are many ways to do a pNFS server, I think
> that building it on top of a cluster file system may be the
> simplest.
>
> So, this leads me to...
> At a glance (just the web pages, I haven't looked at the source),
> it appears that ceph might be useful as a backend to a pNFS server.

The guys at RedHat probably also believe in its usefulness:
http://www.redhat.com/about/news/press-archive/2014/4/red-hat-to-acquire-inktank-provider-of-ceph

I'm not sure if this will make it harder to port or easier. ;-)

Maybe this is something the FreeBSD Foundation should support?

Of course, someone who can actually pull off the port (and maintain
it) has to come forward first...

That's actually one of the things I consider the worst outcome: a
one-off porting effort that isn't maintained and can't really be used
in production.
From owner-freebsd-fs@FreeBSD.ORG Thu May 1 16:19:04 2014
From: David Wolfskill <david@catwhisker.org>
To: fs@freebsd.org
Date: Thu, 1 May 2014 09:18:56 -0700
Subject: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good, right?
Message-ID: <20140501161856.GH1120@albert.catwhisker.org>

I'm probably abusing things somewhat, but limits are to be pushed,
yeah...?  :-}

At work, we have some build servers, presently running FreeBSD/amd64
stable/9 @r257221.  They have 2 "packages" with 6 cores each (Xeon(R)
CPU X5690 @ 3.47GHz); SMT is enabled, so the scheduler sees 24 cores.
The local "build space" is a RAID 5 array of 10 2TB drives with a
single UFS2+SU file system on it (~15TB).  The software builds are
performed within a jail (that is intended to look like FreeBSD/i386
7.1-RELEASE).

My test workload is to:

* create a "sandbox" (by checking out the sources).
* tar up the sandbox (for future iterations).
* Iterate over:
  - Blow away the old sandbox
  - Unpack a new sandbox from the tarball
  - Enter the sandbox & perform a timed software build
  - exit the sandbox & record the results (exit status; elapsed
    time, &c.)

For the "blow away the old sandbox" step, I used to just use "rm -rf"
-- but because the sandbox is ... large, that tends to be rather on
the slow side.  So I cobbled up a shell script that essentially does:

    max_proc=$( sysctl -n kern.maxprocperuid )
    max_proc=$(( $max_proc / 2 ))
    for sb in $@; do
            find $sb -type d -depth +$depth -prune -print0 | \
                xargs -0 -n 1 -P $max_proc rm -f -r &
            wait
            rm -fr $sb
    done

which tends to be faster, as the process is parallelized (vs., I
suppose, "paralyzed" :-}).

I have the use of a designated "test machine," which I subject to my
... experiments.  Based on various other events, as well as at least
one suggestion from a colleague, I thought I'd try turning on soft
updates journaling -- so I did.
My first set of tests was inconclusive -- I saw the load averages
increase quite a bit (from a max of ~18 to ~25); some build times
were around the same, while one was quite a bit longer.  It's
possible that one of my colleagues was doing something on the
machine, though I had tried to let them know that I was running
timing tests and things were a bit more "experimental" than usual.

So I fired off another round of tests yesterday evening.  This
morning, load average was around 0 (and plenty of time had elapsed),
so I thought maybe the 2nd round of tests had completed.

I was mistaken in that belief. :-/

On resuming the tmux session, I found:

... [SU+J], iteration 5, terminated status 0 at Thu May  1 00:01:06 PDT 2014

So I hit ^T and saw:

load: 0.15  cmd: rm 73825 [suspfs] 16479.66r 0.00u 1.38s 0% 1436k

Hmmm...  So I waited a few minutes and hit ^T again and ... nothing.
No response.

Hmm...  I was able to get a response from another session within the
jail, as well as on the host.

It's now been almost 5 more hours after that, and logging in to the
host, I see:

test-machine(9.2)[6] ps axwwl | grep -cw suspfs
185
test-machine(9.2)[7]

I suspect that there may have been a bad interaction between what I
was doing and some cron-initiated activity that started just after
midnight.

But this particular mode of failure seems ... well, "graceless" comes
to mind (if I want to be quite charitable).

I think I'd like to file a PR on this, but I'd like to provide some
decent information with the PR -- and I'd like to be able to do Other
Things with the test machine in the interim.

So: does anyone have suggestions for information I might gather while
the machine is in this state that might help figure out a way to
prevent a recurrence?

Thanks!

[Please note that I've set Reply-To to include both the list and me,
as I'm not subscribed.]

Peace,
david
-- 
David H. Wolfskill				david@catwhisker.org
Taliban: Evil cowards with guns afraid of truth from a 14-year old girl.

See http://www.catwhisker.org/~david/publickey.gpg for my public key.
From owner-freebsd-fs@FreeBSD.ORG Thu May 1 16:52:55 2014
From: Kirk McKusick
To: David Wolfskill
Cc: fs@freebsd.org
Date: Thu, 01 May 2014 09:51:43 -0700
Subject: Re: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good, right?
Message-Id: <201405011651.s41GphgX089174@chez.mckusick.com>
In-Reply-To: <20140501161856.GH1120@albert.catwhisker.org>

> Date: Thu, 1 May 2014 09:18:56 -0700
> From: David Wolfskill
> To: fs@freebsd.org
> Subject: SU+J: 185 processes in state "suspfs" for >8 hrs. .. not good, right?
>
> I'm probably abusing things somewhat, but limits are to be pushed,
> yeah...?  :-}
>
> At work, we have some build servers, presently running FreeBSD/amd64
> stable/9 @r257221.  They have 2 "packages" with 6 cores each (Xeon(R)
> CPU X5690 @ 3.47GHz); SMT is enabled, so the scheduler sees 24
> cores.  The local "build space" is a RAID 5 array of 10 2TB drives
> with a single UFS2+SU file system on it (~15TB).  The software
> builds are performed within a jail (that is intended to look like
> FreeBSD/i386 7.1-RELEASE).
>
> ...

The following fix for related problems was made to head and MFC'ed
to stable/10 but not stable/9.
*** stable/9/sys/ufs/ffs/ffs_vnops.c	2014-03-05 08:51:48.000000000 -0800
--- stable/9/sys/ufs/ffs/ffs_vnops.c	2014-05-01 09:41:35.000000000 -0700
***************
*** 258,266 ****
  			continue;
  		if (bp->b_lblkno > lbn)
  			panic("ffs_syncvnode: syncing truncated data.");
! 		if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT, NULL))
  			continue;
- 		BO_UNLOCK(bo);
  		if ((bp->b_flags & B_DELWRI) == 0)
  			panic("ffs_fsync: not dirty");
  		/*
--- 258,274 ----
  			continue;
  		if (bp->b_lblkno > lbn)
  			panic("ffs_syncvnode: syncing truncated data.");
! 		if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT, NULL) == 0) {
! 			BO_UNLOCK(bo);
! 		} else if (wait != 0) {
! 			if (BUF_LOCK(bp,
! 			    LK_EXCLUSIVE | LK_SLEEPFAIL | LK_INTERLOCK,
! 			    BO_LOCKPTR(bo)) != 0) {
! 				bp->b_vflags &= ~BV_SCANNED;
! 				goto next;
! 			}
! 		} else
  			continue;
  		if ((bp->b_flags & B_DELWRI) == 0)
  			panic("ffs_fsync: not dirty");
  		/*

The associated comment is:

    If we fail to do a non-blocking acquire of a buf lock while doing
    a waiting sync pass we need to do a blocking acquire and restart.
    Another thread, typically the buf daemon, may have this buf locked
    and if we don't wait we can fail to sync the file.  This led to a
    great variety of softdep panics and deadlocks because we rely on
    all dependencies being flushed before proceeding in several cases.

Let me know if it helps your problem.  If it does, I will MFC it to 9.
There have been several other fixes made to SU+J that are more likely
to be the cause of your problem, but they are not easily back-ported
to stable/9.  So if this does not fix your problem my only suggestions
are to turn off journaling or move to running on stable/10.

	Kirk McKusick

From owner-freebsd-fs@FreeBSD.ORG Thu May 1 17:09:52 2014
From: David Wolfskill <david@catwhisker.org>
To: Kirk McKusick
Cc: fs@freebsd.org
Date: Thu, 1 May 2014 10:09:51 -0700
Subject: Re: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good, right?
Message-ID: <20140501170951.GI1120@albert.catwhisker.org>
In-Reply-To: <201405011651.s41GphgX089174@chez.mckusick.com>

On Thu, May 01, 2014 at 09:51:43AM -0700, Kirk McKusick wrote:
>> ...
>
> The following fix for related problems was made to head and MFC'ed
> to stable/10 but not stable/9.
>
> *** stable/9/sys/ufs/ffs/ffs_vnops.c	2014-03-05 08:51:48.000000000 -0800
> --- stable/9/sys/ufs/ffs/ffs_vnops.c	2014-05-01 09:41:35.000000000 -0700
> ***************
> *** 258,266 ****
>   			continue;
>   		if (bp->b_lblkno > lbn)
>   			panic("ffs_syncvnode: syncing truncated data.");
> ! 		if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT, NULL))
>   			continue;
> - 		BO_UNLOCK(bo);
>   		if ((bp->b_flags & B_DELWRI) == 0)
>   			panic("ffs_fsync: not dirty");
>   		/*
> --- 258,274 ----
>   			continue;
>   		if (bp->b_lblkno > lbn)
>   			panic("ffs_syncvnode: syncing truncated data.");
> ! 		if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT, NULL) == 0) {
> ! 			BO_UNLOCK(bo);
> ! 		} else if (wait != 0) {
> ! 			if (BUF_LOCK(bp,
> ! 			    LK_EXCLUSIVE | LK_SLEEPFAIL | LK_INTERLOCK,
> ! 			    BO_LOCKPTR(bo)) != 0) {
> ! 				bp->b_vflags &= ~BV_SCANNED;
> ! 				goto next;
> ! 			}
> ! 		} else
>   			continue;
>   		if ((bp->b_flags & B_DELWRI) == 0)
>   			panic("ffs_fsync: not dirty");
>   		/*
>
> The associated comment is:
>
>     If we fail to do a non-blocking acquire of a buf lock while doing
>     a waiting sync pass we need to do a blocking acquire and restart.
>     Another thread, typically the buf daemon, may have this buf locked
>     and if we don't wait we can fail to sync the file.  This led to a
>     great variety of softdep panics and deadlocks because we rely on
>     all dependencies being flushed before proceeding in several cases.

Cool -- thanks!

> Let me know if it helps your problem.  If it does, I will MFC it to 9.
> There have been several other fixes made to SU+J that are more likely
> to be the cause of your problem, but they are not easily back-ported
> to stable/9.  So if this does not fix your problem my only suggestions
> are to turn off journaling or move to running on stable/10.
>
> Kirk McKusick

Roger that.  And yes, stable/10 is a goal -- but I *just* finally
managed to get the machines migrated from 8.2-ish to 9.2. :-)

(Note: I do not have direct control -- merely a measure of
influence. :-})

Peace,
david
-- 
David H. Wolfskill				david@catwhisker.org
Taliban: Evil cowards with guns afraid of truth from a 14-year old girl.

See http://www.catwhisker.org/~david/publickey.gpg for my public key.
From owner-freebsd-fs@FreeBSD.ORG Thu May 1 18:20:59 2014
From: David Wolfskill <david@catwhisker.org>
To: Kirk McKusick
Cc: fs@freebsd.org
Date: Thu, 1 May 2014 11:20:57 -0700
Subject: Re: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good, right?
Message-ID: <20140501182057.GJ1120@albert.catwhisker.org>
In-Reply-To: <201405011651.s41GphgX089174@chez.mckusick.com>

On Thu, May 01, 2014 at 09:51:43AM -0700, Kirk McKusick wrote:
> ...
>
> The following fix for related problems was made to head and MFC'ed
> to stable/10 but not stable/9.
>
> *** stable/9/sys/ufs/ffs/ffs_vnops.c	2014-03-05 08:51:48.000000000 -0800
> --- stable/9/sys/ufs/ffs/ffs_vnops.c	2014-05-01 09:41:35.000000000 -0700
> ***************
> *** 258,266 ****
>   			continue;
>   		if (bp->b_lblkno > lbn)
>   			panic("ffs_syncvnode: syncing truncated data.");
if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT, NULL)) > continue; > - BO_UNLOCK(bo); > if ((bp->b_flags & B_DELWRI) =3D=3D 0) > panic("ffs_fsync: not dirty"); > /* > --- 258,274 ---- > continue; > if (bp->b_lblkno > lbn) > panic("ffs_syncvnode: syncing truncated data."); > ! if (BUF_LOCK(bp, LK_EXCLUSIVE | LK_NOWAIT, NULL) =3D=3D 0) { > ! BO_UNLOCK(bo); > ! } else if (wait !=3D 0) { > ! if (BUF_LOCK(bp, > ! LK_EXCLUSIVE | LK_SLEEPFAIL | LK_INTERLOCK, > ! BO_LOCKPTR(bo)) !=3D 0) { > ! bp->b_vflags &=3D ~BV_SCANNED; > ! goto next; > ! } > ! } else > continue; > if ((bp->b_flags & B_DELWRI) =3D=3D 0) > panic("ffs_fsync: not dirty"); > /* >=20 > The associated comment is: >=20 > If we fail to do a non-blocking acquire of a buf lock while doing a > waiting sync pass we need to do a blocking acquire and restart. > Another thread, typically the buf daemon, may have this buf locked and > if we don't wait we can fail to sync the file. This lead to a great > variety of softdep panics and deadlocks because we rely on all > dependencies being flushed before proceeding in several cases. >=20 > Let me know if it helps your problem. If it does, I will MFC it to 9. > There have been several other fixes made to SU+J that are more likely > to be the cause of your problem, but they are not easily back-ported > to stable/9. So if this does not fix your problem my only suggestions > are to turn off journaling or move to running on stable/10. > ... Hrrrmmm... Looks as if the above reflects stable/10's r251171 (in particular, "Convert the bufobj lock to rwlock.") -- stable/9 doesn't seem to know about BO_LOCKPTR(), and gcc makes some assumptions. That doesn't turn out well. I think that migrating to stable/10 might make more sense than figuring out how to fix this, especially if there are other causes of the observed failure that are fixed in stable/10. Thanks.... Peace, david --=20 David H. Wolfskill david@catwhisker.org Taliban: Evil cowards with guns afraid of truth from a 14-year old girl. See http://www.catwhisker.org/~david/publickey.gpg for my public key. 
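"gcc makes some assumptions" refers to implicit function declarations:
in the gnu89 mode the base-system gcc defaults to, a call to an
identifier with no declaration in scope is assumed to be a call to a
function returning int.  A tiny stand-alone demonstration follows; it is
hypothetical code, not the kernel source, the struct is a placeholder,
and the file deliberately does not link -- reproducing the failure is
the point.

/*
 * demo.c -- what happens when the stable/10 patch meets stable/9,
 * where the BO_LOCKPTR() macro does not exist.  Building with
 * "cc -std=gnu89 demo.c" yields "warning: implicit declaration of
 * function 'BO_LOCKPTR'", gcc assumes it returns int, and the link
 * then fails with an undefined reference -- the "doesn't turn out
 * well" above.
 */
struct bufobj {
	int	bo_flag;	/* placeholder; not the real struct */
};

int
main(void)
{
	struct bufobj bo = { 0 };

	/*
	 * With no BO_LOCKPTR in scope this parses as a call to an
	 * undeclared function; had it linked, the pointer the real
	 * macro yields would have been truncated to int on amd64.
	 */
	return (BO_LOCKPTR(&bo) != 0);
}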
From owner-freebsd-fs@FreeBSD.ORG Thu May 1 18:28:12 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DA751DFB for ; Thu, 1 May 2014 18:28:12 +0000 (UTC) Received: from chez.mckusick.com (chez.mckusick.com [IPv6:2001:5a8:4:7e72:4a5b:39ff:fe12:452]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B60391BDF for ; Thu, 1 May 2014 18:28:12 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id s41IRSpS010249; Thu, 1 May 2014 11:27:28 -0700 (PDT) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201405011827.s41IRSpS010249@chez.mckusick.com> To: fs@freebsd.org, David Wolfskill Subject: Re: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good, right? In-reply-to: <20140501182057.GJ1120@albert.catwhisker.org> Date: Thu, 01 May 2014 11:27:28 -0700 From: Kirk McKusick X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 May 2014 18:28:12 -0000

> Date: Thu, 1 May 2014 11:20:57 -0700
> From: David Wolfskill
> To: Kirk McKusick
> Cc: fs@freebsd.org
> Subject: Re: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good,
> right?
>
> On Thu, May 01, 2014 at 09:51:43AM -0700, Kirk McKusick wrote:
>
>> Let me know if it helps your problem.  If it does, I will MFC it to 9.
>> There have been several other fixes made to SU+J that are more likely
>> to be the cause of your problem, but they are not easily back-ported
>> to stable/9.  So if this does not fix your problem, my only suggestions
>> are to turn off journaling or move to running on stable/10.
>> ...
>
> Hrrrmmm...  Looks as if the above reflects stable/10's r251171 (in
> particular, "Convert the bufobj lock to rwlock.") -- stable/9 doesn't
> seem to know about BO_LOCKPTR(), and gcc makes some assumptions.  That
> doesn't turn out well.
>
> I think that migrating to stable/10 might make more sense than figuring
> out how to fix this, especially if there are other causes of the
> observed failure that are fixed in stable/10.
>
> Thanks....
>
> Peace,
> david
> -- 
> David H. Wolfskill				david@catwhisker.org
> Taliban: Evil cowards with guns afraid of truth from a 14-year old girl.
>
> See http://www.catwhisker.org/~david/publickey.gpg for my public key.

I think that you have now discovered why Jeff did not MFC to stable/9.
You are correct that putting in this fix requires seriously more work.
Sorry about sending you down that path.

	Kirk McKusick

From owner-freebsd-fs@FreeBSD.ORG Thu May 1 18:30:27 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 529B7E96 for ; Thu, 1 May 2014 18:30:27 +0000 (UTC) Received: from albert.catwhisker.org (mx.catwhisker.org [198.144.209.73]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 003D71BF3 for ; Thu, 1 May 2014 18:30:26 +0000 (UTC) Received: from albert.catwhisker.org (localhost [127.0.0.1]) by albert.catwhisker.org (8.14.8/8.14.8) with ESMTP id s41IUPnp034040; Thu, 1 May 2014 11:30:25 -0700 (PDT) (envelope-from david@albert.catwhisker.org) Received: (from david@localhost) by albert.catwhisker.org (8.14.8/8.14.8/Submit) id s41IUPBh034039; Thu, 1 May 2014 11:30:25 -0700 (PDT) (envelope-from david) Date: Thu, 1 May 2014 11:30:25 -0700 From: David Wolfskill To: Kirk McKusick Subject: Re: SU+J: 185 processes in state "suspfs" for >8 hrs. ... not good, right? Message-ID: <20140501183025.GK1120@albert.catwhisker.org> References: <20140501182057.GJ1120@albert.catwhisker.org> <201405011827.s41IRSpS010249@chez.mckusick.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="tLveGb43symFnK3n" Content-Disposition: inline In-Reply-To: <201405011827.s41IRSpS010249@chez.mckusick.com> User-Agent: Mutt/1.5.23 (2014-03-12) Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 May 2014 18:30:27 -0000

On Thu, May 01, 2014 at 11:27:28AM -0700, Kirk McKusick wrote:
> ...
> I think that you have now discovered why Jeff did not MFC to stable/9.

Quite so. :-)

> You are correct that putting in this fix requires seriously more work.
> Sorry about sending you down that path.
> ...

No apology needed -- it's good to have data I can point to when saying
that we need to update. :-)

Peace,
david
-- 
David H. Wolfskill				david@catwhisker.org
Taliban: Evil cowards with guns afraid of truth from a 14-year old girl.

See http://www.catwhisker.org/~david/publickey.gpg for my public key.

From owner-freebsd-fs@FreeBSD.ORG Fri May 2 09:17:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1671C666; Fri, 2 May 2014 09:17:43 +0000 (UTC) Received: from mail-qc0-x232.google.com (mail-qc0-x232.google.com [IPv6:2607:f8b0:400d:c01::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9BD121D02; Fri, 2 May 2014 09:17:42 +0000 (UTC) Received: by mail-qc0-f178.google.com with SMTP id i8so4415598qcq.23 for ; Fri, 02 May 2014 02:17:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=VMqaFDVgY0n8+oacavk5qjzAdwzai9wCyd2R9gp3K44=; b=FBiwzAQCW3vbcnJt+io4Xp5q/BMjUnzX4WDsrjwmdrpvyM0kZTmQP7GnMC3vAo8DzI rOJECsn1/vFw3RFyDCrxYe2n9exJNQdSJ+2ArU2uSN6am8W59lHH7TXuZiyLco1fzMaN 3jkwx3GkWKFhnu93vTUqU2pkJ4kxhWfCZfvB/8J4pzpNssRccyNZUMinKO49k8cFlVHh o3D4NEF28xsRqHs+ZnhPQjPYm8IaA6iQ3XcAveiyclRvNE1EnBniJibebd4wcpdIC7P1 apRK/uGX1rQf9oDYu5aLfHLg9JCVxyJMJRORuxJ/+uILnQU/usrfU7lpPcNIzrAEkK1k yuug== MIME-Version: 1.0 X-Received: by 10.140.19.133 with SMTP id 5mr19315315qgh.46.1399022261709; Fri, 02 May 2014 02:17:41 -0700 (PDT) Received: by 10.96.181.230 with HTTP; Fri, 2 May 2014 02:17:41 -0700 (PDT) Date: Fri, 2 May 2014 02:17:41 -0700 Message-ID: Subject: lindev - Unable to build head branch From: varanasi sainath To: freebsd-questions@freebsd.org, freebsd-fs@freebsd.org, freebsd-drivers@freebsd.org, eadler@FreeBSD.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 09:17:43 -0000

Hi All,

Unable to build the head branch because of the following change:
http://svnweb.freebsd.org/base/head/sys/dev/?view=log

Please also remove the corresponding Makefile, located at:
http://svnweb.freebsd.org/base/head/sys/modules/lindev/Makefile?view=log

Thanks,
Sainath.
-- Sainath Varanasi Hyderabad 09000855250 *My Website : http://s21embedded.webs.com * *Linked In Profile : http://in.linkedin.com/pub/sainathvaranasi ....* From owner-freebsd-fs@FreeBSD.ORG Fri May 2 09:26:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 651A6B14; Fri, 2 May 2014 09:26:01 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id F364E1DEB; Fri, 2 May 2014 09:26:00 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 902EA20E7088B; Fri, 2 May 2014 09:25:58 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.9 required=8.0 tests=AWL,BAYES_05,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 268AC20E70885; Fri, 2 May 2014 09:25:52 +0000 (UTC) Message-ID: From: "Steven Hartland" To: "varanasi sainath" , , , , References: Subject: Re: lindev - Unable to build head branch Date: Fri, 2 May 2014 10:25:54 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 09:26:01 -0000 ----- Original Message ----- From: "varanasi sainath" > Hi All, > > Unable to build head branch because of the following change > http://svnweb.freebsd.org/base/head/sys/dev/?view=log > > Please remove the respective Makefile also > Located at : > http://svnweb.freebsd.org/base/head/sys/modules/lindev/Makefile?view=log I believe this has just been fixed by: http://svnweb.freebsd.org/changeset/base/265217 Regards Steve From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:32:20 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D1F0B78A; Fri, 2 May 2014 21:32:20 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A79261D74; Fri, 2 May 2014 21:32:20 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42LWKcW074132; Fri, 2 May 2014 21:32:20 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42LWKIH074131; Fri, 2 May 2014 21:32:20 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:32:20 GMT Message-Id: <201405022132.s42LWKIH074131@freefall.freebsd.org> To: johan@immortal.localhost.nl, ae@FreeBSD.org, freebsd-fs@FreeBSD.org From: 
ae@FreeBSD.org Subject: Re: kern/36566: [smbfs] System reboot with dead smb mount and umount X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:32:20 -0000 Synopsis: [smbfs] System reboot with dead smb mount and umount State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:31:29 UTC 2014 State-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=36566 From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:34:20 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D466B940; Fri, 2 May 2014 21:34:20 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AC3031D8D; Fri, 2 May 2014 21:34:20 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42LYKF2074343; Fri, 2 May 2014 21:34:20 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42LYKmn074342; Fri, 2 May 2014 21:34:20 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:34:20 GMT Message-Id: <201405022134.s42LYKmn074342@freefall.freebsd.org> To: bsditer@gmail.com, ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/87859: [smbfs] System reboot while umount smbfs. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:34:20 -0000 Synopsis: [smbfs] System reboot while umount smbfs. State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:33:23 UTC 2014 State-Changed-Why: Fixed in head/ and stable/10. Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Fri May 2 21:33:23 UTC 2014 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=87859 From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:35:57 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BB42BAA8; Fri, 2 May 2014 21:35:57 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 91AE61DA4; Fri, 2 May 2014 21:35:57 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42LZvZj074515; Fri, 2 May 2014 21:35:57 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42LZvEH074514; Fri, 2 May 2014 21:35:57 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:35:57 GMT Message-Id: <201405022135.s42LZvEH074514@freefall.freebsd.org> To: h8msft@gmail.com, ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/139407: [smbfs] [panic] smb mount causes system crash if remote share no longer accessible X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:35:57 -0000 Synopsis: [smbfs] [panic] smb mount causes system crash if remote share no longer accessible State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:34:30 UTC 2014 State-Changed-Why: Fixed in head/ and stable/10. Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Fri May 2 21:34:30 UTC 2014 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=139407 From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:44:55 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C5465E91; Fri, 2 May 2014 21:44:55 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 99D091FB0; Fri, 2 May 2014 21:44:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42Litxj077856; Fri, 2 May 2014 21:44:55 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42Litqj077855; Fri, 2 May 2014 21:44:55 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:44:55 GMT Message-Id: <201405022144.s42Litqj077855@freefall.freebsd.org> To: jumper99@gmx.de, ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/161579: [smbfs] FreeBSD sometimes panics when an smb share is mounted and the serving machine is disconnected/rebooted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:44:55 -0000 Synopsis: [smbfs] FreeBSD sometimes panics when an smb share is mounted and the serving machine is disconnected/rebooted State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:44:18 UTC 2014 State-Changed-Why: Fixed in head/ and stable/10. Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Fri May 2 21:44:18 UTC 2014 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=161579 From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:46:59 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 09396140; Fri, 2 May 2014 21:46:59 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D2D591FCB; Fri, 2 May 2014 21:46:58 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42LkwlQ078162; Fri, 2 May 2014 21:46:58 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42LkvEa078150; Fri, 2 May 2014 21:46:57 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:46:57 GMT Message-Id: <201405022146.s42LkvEa078150@freefall.freebsd.org> To: v.chernyadev@tradesoft.ru, ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/178412: [smbfs] Coredump when smbfs mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:46:59 -0000 Synopsis: [smbfs] Coredump when smbfs mounted State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:45:49 UTC 2014 State-Changed-Why: Fixed in head/ and stable/10. Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Fri May 2 21:45:49 UTC 2014 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=178412 From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:47:41 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4768A1B0; Fri, 2 May 2014 21:47:41 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1CC1D1FD0; Fri, 2 May 2014 21:47:41 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42LleWg078267; Fri, 2 May 2014 21:47:40 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42LleLk078266; Fri, 2 May 2014 21:47:40 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:47:40 GMT Message-Id: <201405022147.s42LleLk078266@freefall.freebsd.org> To: nakal@web.de, ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/186652: [smbfs] [panic] crash during umount -a -t smbfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:47:41 -0000 Synopsis: [smbfs] [panic] crash during umount -a -t smbfs State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:47:05 UTC 2014 State-Changed-Why: Fixed in head/ and stable/10. Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Fri May 2 21:47:05 UTC 2014 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=186652 From owner-freebsd-fs@FreeBSD.ORG Fri May 2 21:51:50 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F215B267; Fri, 2 May 2014 21:51:49 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C689A1092; Fri, 2 May 2014 21:51:49 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s42Lpn86078713; Fri, 2 May 2014 21:51:49 GMT (envelope-from ae@freefall.freebsd.org) Received: (from ae@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s42LpmDI078712; Fri, 2 May 2014 21:51:48 GMT (envelope-from ae) Date: Fri, 2 May 2014 21:51:48 GMT Message-Id: <201405022151.s42LpmDI078712@freefall.freebsd.org> To: tommy@anakin.ws, ae@FreeBSD.org, freebsd-fs@FreeBSD.org, ae@FreeBSD.org From: ae@FreeBSD.org Subject: Re: kern/173235: [smbfs] [panic] Have received two crashes within 1 day after installing new packages: Fatal trap 12: page fault in kernel mode X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 May 2014 21:51:50 -0000 Synopsis: [smbfs] [panic] Have received two crashes within 1 day after installing new packages: Fatal trap 12: page fault in kernel mode State-Changed-From-To: open->closed State-Changed-By: ae State-Changed-When: Fri May 2 21:49:54 UTC 2014 State-Changed-Why: Fixed in head@r264600 and stable/10@r265243. Responsible-Changed-From-To: freebsd-fs->ae Responsible-Changed-By: ae Responsible-Changed-When: Fri May 2 21:49:54 UTC 2014 Responsible-Changed-Why: Take it. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=173235 From owner-freebsd-fs@FreeBSD.ORG Sun May 4 05:17:30 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8E8659EB; Sun, 4 May 2014 05:17:30 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 640171DB4; Sun, 4 May 2014 05:17:30 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s445HUUZ092714; Sun, 4 May 2014 05:17:30 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s445HUtk092713; Sun, 4 May 2014 05:17:30 GMT (envelope-from linimon) Date: Sun, 4 May 2014 05:17:30 GMT Message-Id: <201405040517.s445HUtk092713@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/175328: [fusefs] [panic] fusefs kernel page fault X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 05:17:30 -0000 Old Synopsis: [panic] fusefs kernel page fault New Synopsis: [fusefs] [panic] fusefs kernel page fault Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun May 4 05:17:10 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=175328 From owner-freebsd-fs@FreeBSD.ORG Sun May 4 05:17:56 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 335F2A9F; Sun, 4 May 2014 05:17:56 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 085F01DC1; Sun, 4 May 2014 05:17:56 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s445Htr7092781; Sun, 4 May 2014 05:17:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s445Htq9092780; Sun, 4 May 2014 05:17:55 GMT (envelope-from linimon) Date: Sun, 4 May 2014 05:17:55 GMT Message-Id: <201405040517.s445Htq9092780@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/182739: [fusefs] [panic] sysutils/fusefs-kmod kernel panic on rsync X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 05:17:56 -0000 Old Synopsis: [panic] sysutils/fusefs-kmod kernel panic on rsync New Synopsis: [fusefs] [panic] sysutils/fusefs-kmod kernel panic on rsync Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun May 4 05:17:41 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=182739 From owner-freebsd-fs@FreeBSD.ORG Sun May 4 05:18:48 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20A41B5E; Sun, 4 May 2014 05:18:48 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E9B6E1DD3; Sun, 4 May 2014 05:18:47 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s445Ils4092909; Sun, 4 May 2014 05:18:47 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s445Il5U092907; Sun, 4 May 2014 05:18:47 GMT (envelope-from linimon) Date: Sun, 4 May 2014 05:18:47 GMT Message-Id: <201405040518.s445Il5U092907@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/184013: [fusefs] truecrypt broken (probably fusefs issue) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 05:18:48 -0000 Old Synopsis: truecrypt broken (probably fusefs issue) New Synopsis: [fusefs] truecrypt broken (probably fusefs issue) Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun May 4 05:15:39 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=184013 From owner-freebsd-fs@FreeBSD.ORG Sun May 4 16:29:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BB835449 for ; Sun, 4 May 2014 16:29:43 +0000 (UTC) Received: from mail-ee0-x22c.google.com (mail-ee0-x22c.google.com [IPv6:2a00:1450:4013:c00::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 50CA515F6 for ; Sun, 4 May 2014 16:29:43 +0000 (UTC) Received: by mail-ee0-f44.google.com with SMTP id c41so4563847eek.17 for ; Sun, 04 May 2014 09:29:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=daMK1YeriaJM/EAvrrFb4CEf99qQsjvY5c/+ZGglN/M=; b=S15qAvWstm1C3xoM6y4cSu2AdHSqlpDrOdl2yOmB84fXX64PeXCLTWpnylXIkJ+CiB BwoGLcfPRqBBs9D+WQTiVcC0WkbQ7wXeeuUta7NnvtxdIeDJw1MCb4+zdQWMA5Zq5SDX Bt83/FLXIYZTJiq9vmJpP+k+RAKm4TDe8LBwIFRiCeP3ZAbwPMev53KgI9reVOUODnbs U9dtX48sVe+pjc/o1IWoIVnWUqRgkDV3x0SI/5+QUAC5No0/GsqzCJWZ/zJd8yIlFVso jcbBeYki63029NHCe+oQZwLhvCsromi9CiFe9S+G1FH/q9NNTvop77HwZo2mCqMQfhE8 xSYw== X-Received: by 10.15.53.135 with SMTP id r7mr2700657eew.102.1399220981468; Sun, 04 May 2014 09:29:41 -0700 (PDT) Received: from strashydlo.home (aeef175.neoplus.adsl.tpnet.pl. 
[79.186.109.175]) by mx.google.com with ESMTPSA id a42sm22132649ees.10.2014.05.04.09.29.40 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sun, 04 May 2014 09:29:40 -0700 (PDT) Sender: Edward Tomasz Napierała Subject: Re: ZFS ACL inheritance with aclmode=passthrough Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=iso-8859-2 From: Edward Tomasz Napierała In-Reply-To: <52125FF9.4080005@gmail.com> Date: Sun, 4 May 2014 18:29:38 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <586DA3CC-58F1-45B9-9775-17D879C7FE5B@FreeBSD.org> References: <52125FF9.4080005@gmail.com> To: Andrey Russev X-Mailer: Apple Mail (2.1283) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 16:29:43 -0000

Message written by Andrey Russev on 19 Aug 2013, at 20:12:

> Hello,
> it looks like the ZFS ACL inheritance implementation in 8.4-RELEASE
> does not match the manual page.  With aclinherit=restricted and
> aclmode=passthrough, all permissions inherited from allow ACEs are
> masked(?) by the group permissions.  For example, the ACEs of the
> parent directory are
>
>   group:wheel:rwxp----------:-d----:allow
>        owner@:rwxp--aARWcCos:------:allow
>        group@:r-x---a-R-c--s:------:allow
>     everyone@:r-x---a-R-c--s:------:allow
>
> but the ACEs of the child directory are
>
>   group:wheel:r-x-----------:-d----:allow
>        owner@:rwxp--aARWcCos:------:allow
>        group@:r-x---a-R-c--s:------:allow
>     everyone@:r-x---a-R-c--s:------:allow
>
> I think that the first entry must be copied without modification.  It
> works this way in 8.1-RELEASE.
>
> I believe this difference was introduced by r224174 in lines:
>
> 1732	zfs_acl_chmod(vap->va_type, acl_ids->z_mode,
> 1733	    (zfsvfs->z_acl_inherit == ZFS_ACL_RESTRICTED),
> 1734	    acl_ids->z_aclp);
>
> because the function zfs_acl_chmod applies the group mask to all allow
> ACEs if the third argument is non-zero, and everything works as
> expected when aclinherit=passthrough.  Am I right?

First of all, sorry for the delay.  No idea where that time went.

I think your analysis is correct.  However, I think it's not something
we should touch.  It's either a documentation bug - in which case the
manual page should be updated - or a semantics issue that should be
dealt with by upstream (which probably means OpenZFS) and then
imported; it would be bad for FreeBSD to diverge from other ZFS
implementations in file permission semantics.
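The masking Andrey describes can be sketched like this.  This is a
simplified illustration, not the OpenZFS source: the struct layout,
names, and values below are invented, and only the shape of the logic
follows the report -- when the third argument of zfs_acl_chmod() is
nonzero, every allow entry's access bits are ANDed with the group bits,
which is how group:wheel:rwxp above becomes group:wheel:r-x when group@
grants only r-x.

#include <stddef.h>
#include <stdint.h>

#define ACE_TYPE_ALLOW	0	/* invented values, for illustration */
#define ACE_TYPE_DENY	1

struct ace {
	int		a_type;		/* allow or deny */
	uint32_t	a_access_mask;	/* rwxp... permission bits */
};

/*
 * Sketch of the "restricted" trim: with trim != 0, reduce every
 * allow ACE to the permissions the owning group holds.  Deny ACEs
 * are left alone.  With trim == 0 (passthrough), entries are copied
 * unmodified -- the behavior the report says 8.1-RELEASE had.
 */
static void
acl_trim_allow(struct ace *aces, size_t n, uint32_t group_bits, int trim)
{
	if (!trim)
		return;
	for (size_t i = 0; i < n; i++)
		if (aces[i].a_type == ACE_TYPE_ALLOW)
			aces[i].a_access_mask &= group_bits;
}

Whether that trimming should apply at all when aclmode=passthrough is
exactly the documentation-versus-semantics question raised above.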
From owner-freebsd-fs@FreeBSD.ORG Sun May 4 18:08:55 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6C392333; Sun, 4 May 2014 18:08:55 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3F00C1135; Sun, 4 May 2014 18:08:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s44I8tCL090639; Sun, 4 May 2014 18:08:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s44I8tOk090638; Sun, 4 May 2014 18:08:55 GMT (envelope-from linimon) Date: Sun, 4 May 2014 18:08:55 GMT Message-Id: <201405041808.s44I8tOk090638@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 18:08:55 -0000 Old Synopsis: zfs panic on root mount 10-stable New Synopsis: [zfs] zfs panic on root mount 10-stable Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun May 4 18:08:31 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=189355 From owner-freebsd-fs@FreeBSD.ORG Sun May 4 18:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A8E3D64A for ; Sun, 4 May 2014 18:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7C9AE1207 for ; Sun, 4 May 2014 18:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s44IK12l094875 for ; Sun, 4 May 2014 18:20:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s44IK1Fg094874; Sun, 4 May 2014 18:20:01 GMT (envelope-from gnats) Date: Sun, 4 May 2014 18:20:01 GMT Message-Id: <201405041820.s44IK1Fg094874@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list Reply-To: Steven Hartland List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 18:20:01 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. 
From: "Steven Hartland" To: , Cc: Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Sun, 4 May 2014 19:16:43 +0100 What version of stable, what zpool layout are you using on what hardware? Regards Steve From owner-freebsd-fs@FreeBSD.ORG Sun May 4 23:00:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D05CDC25 for ; Sun, 4 May 2014 23:00:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id BE7661AED for ; Sun, 4 May 2014 23:00:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s44N01Qk000146 for ; Sun, 4 May 2014 23:00:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s44N01YV000145; Sun, 4 May 2014 23:00:01 GMT (envelope-from gnats) Date: Sun, 4 May 2014 23:00:01 GMT Message-Id: <201405042300.s44N01YV000145@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 May 2014 23:00:01 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. From: Radim Kolar To: Steven Hartland , "bug-followup@freebsd.org" Cc: Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Sun, 4 May 2014 22:54:34 +0000 --_7e6d6f21-3c4f-4028-b48b-906f77411ad3_ Content-Type: text/plain; charset="iso-8859-2" Content-Transfer-Encoding: quoted-printable 10.0-STABLE #0 r265265 running in VMware. i run this virtual machine nice f= reebsd 7. Problems with ZFS started with 10.0 release. Sometimes it panicked at boot = during initial mount after unclean shutdown=2C but restarting virtual machi= ne fixed problem. Reported problem is after good shutdown and update to 10-STABLE. ZFS versio= n 5=2C pool version 5000. Architecture is i386 (32-bit). If you know how to make panic dump work during initial mount let me know i = will capture more data. = --_7e6d6f21-3c4f-4028-b48b-906f77411ad3_ Content-Type: text/html; charset="iso-8859-2" Content-Transfer-Encoding: quoted-printable
From owner-freebsd-fs@FreeBSD.ORG Mon May 5 03:19:32 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 882916C9; Mon, 5 May 2014 03:19:32 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 59977159F; Mon, 5 May 2014 03:19:32 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s453JWdS001194; Mon, 5 May 2014 03:19:32 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s453JWVL001193; Mon, 5 May 2014 03:19:32 GMT (envelope-from linimon) Date: Mon, 5 May 2014 03:19:32 GMT Message-Id: <201405050319.s453JWVL001193@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/183077: [opensolaris] [patch] don't have the compiler inline txg_quiesce so that zilstat works X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 May 2014 03:19:32 -0000

Old Synopsis: don't have the compiler inline txg_quiesce so that zilstat works
New Synopsis: [opensolaris] [patch] don't have the compiler inline txg_quiesce so that zilstat works

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon May 5 03:18:50 UTC 2014
Responsible-Changed-Why:
Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=183077

From owner-freebsd-fs@FreeBSD.ORG Mon May 5 11:06:43 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6E55BD0D for ; Mon, 5 May 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5AD0B1CE6 for ; Mon, 5 May 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s45B6hmX083095 for ; Mon, 5 May 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s45B6gle083093 for freebsd-fs@FreeBSD.org; Mon, 5 May 2014 11:06:42 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 5 May 2014 11:06:42 GMT Message-Id: <201405051106.s45B6gle083093@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 May 2014 11:06:43 -0000

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD
users.  These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker      Resp.      Description
--------------------------------------------------------------------------------
o kern/189355  fs         [zfs] zfs panic on root mount 10-stable
o kern/188443  fs         [smbfs] Segfault with tail(1) when mmap(2) called
o kern/188328  fs         [zfs] UPDATING should provide caveats for running `zpo
o kern/188187  fs         [zfs] [panic] 10-stable: Kernel panic on zpool import:
o kern/187905  fs         [zpool] Confusion zpool with a block size in HDD - blo
o kern/187778  fs         [zfs] Two ZFS filesystems mounted on / at same time
o kern/187594  fs         [zfs] [patch] ZFS ARC behavior problem and fix
s kern/187414  fs         [zfs] ZFS Write Deadlock on 8.4
o kern/187261  fs         [fusefs] FUSE kernel panic when using socket / bind
o kern/186942  fs         [zfs] [panic] Fatal trap 12 (seems zfs related)
o kern/186720  fs         [xfs] is xfs now unsupported in the kernel?
o kern/186645  fs         [fusefs] Crash after unmounting wdfs
o kern/186515  fs         [gptboot] Doesn't boot with GPT when # of entries over
o kern/186112  fs         [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479
o kern/185963  fs         [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185734  fs         [zfs] [panic] panic on stable/10 when writing to ZFS d
o kern/185374  fs         [msdosfs] [panic] Unmounting msdos filesystem in a bad
o kern/184677  fs         [zfs] [panic] ZFS snapshot umount kernel panic
o kern/184478  fs         [smbfs] mount_smbfs cannot read/write files
o kern/184013  fs         [fusefs] truecrypt broken (probably fusefs issue)
o kern/183077  fs         [opensolaris] [patch] don't have the compiler inline t
o kern/182739  fs         [fusefs] [panic] sysutils/fusefs-kmod kernel panic on
o kern/182536  fs         [zfs] zfs deadlock
o kern/181966  fs         [zfs] [panic] Kernel panic in ZFS I/O: solaris assert:
o kern/181834  fs         [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181791  fs         [zfs] ZFS ARC Deadlock
o kern/181565  fs         [swap] Problem with vnode-backed swap space.
o kern/181377  fs         [zfs] zfs recv causes an inconsistant pool
o kern/181281  fs         [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082  fs         [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979  fs         [netsmb][patch]: Fix large files handling
o kern/180876  fs         [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678  fs         [NFS] succesfully exported filesystems being reported
o kern/180438  fs         [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236  fs         [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854  fs         [ufs] FreeBSD kernel crash in UFS
s kern/178467  fs         [zfs] [request] Optimized Checksum Code for ZFS
o kern/178388  fs         [zfs] [patch] allow up to 8MB recordsize
o kern/178387  fs         [zfs] [patch] sparse files performance improvements
o kern/178349  fs         [zfs] zfs scrub on deduped data could be much less see
o kern/178329  fs         [zfs] extended attributes leak
o kern/178238  fs         [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231  fs         [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985  fs         [zfs] disk usage problem when copying from one zfs dat
o kern/177971  fs         [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966  fs         [zfs] resilver completes but subsequent scrub reports
o kern/177658  fs         [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536  fs         [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445  fs         [hast] HAST panic
o kern/177240  fs         [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978  fs         [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857  fs         [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253   fs         zpool(8): zfs pool indentation is misleading/wrong
o kern/176141  fs         [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950  fs         [zfs] Possible deadlock in zfs after long uptime
o kern/175897  fs         [zfs] operations on readonly zpool hang
o kern/175449  fs         [unionfs] unionfs and devfs misbehaviour
o kern/175328  fs         [fusefs] [panic] fusefs kernel page fault
o kern/175179  fs         [zfs] ZFS may attach wrong device on move
o kern/175071  fs         [softupdates] [panic] softdep_deallocate_dependencies:
o kern/174372  fs         [zfs] Pagefault appears to be related to ZFS
o kern/174315  fs         [zfs] chflags uchg not supported
o kern/174310  fs         [zfs] root point mounting broken on CURRENT with multi
o kern/174279  fs         [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830  fs         [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718  fs         [zfs] phantom directory in zraid2 pool
f kern/173657  fs         [nfs] strange UID map with nfsuserd
o kern/173363  fs         [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136  fs         [unionfs] mounting above the NFS read-only share panic
o kern/172942  fs         [smbfs] Unmounting a smb mount when the server became
o kern/172630  fs         [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c
o kern/172348  fs         [unionfs] umount -f of filesystem in use with readonly
o kern/172334  fs         [unionfs] unionfs permits recursive union mounts; caus
f kern/172197  fs         [zfs] Userquota (as well as groupquota) does not work
o kern/172092  fs         [zfs] [panic] zfs import panics kernel
o kern/171626  fs         [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415  fs         [zfs] zfs recv fails with "cannot receive incremental
o kern/170945  fs         [gpt] disk layout not portable between direct connect
o bin/170778   fs         [zfs] [panic] FreeBSD panics randomly
o kern/170680  fs         [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170523  fs         [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNT
o kern/170497  fs         [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945  fs         [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480  fs         [zfs] ZFS stalls on heavy I/O
o kern/169398  fs         [zfs] Can't remove file with permanent error
o kern/169339  fs         panic while " : > /etc/123"
o kern/169319  fs         [zfs] zfs resilver can't complete
o kern/168947  fs         [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942  fs         [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158  fs         [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979  fs         [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977  fs         [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688  fs         [fusefs] Incorrect signal handling with direct_io
o kern/167685  fs         [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612  fs         [portalfs] The portal file system gets stuck inside po
o kern/167362  fs         [fusefs] Reproduceble Page Fault when running rsync ov
o kern/167272  fs         [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260  fs         [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109  fs         [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105  fs         [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067  fs         [zfs] [panic] ZFS panics the server
o kern/167065  fs         [zfs] boot fails when a spare is the boot disk
o kern/167048  fs         [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912  fs         [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851  fs         [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477  fs         [nfs] NFS data corruption.
o kern/165950  fs         [ffs] SU+J and fsck problem
o kern/165521  fs         [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392  fs         [ufs] [patch] Multiple mkdir/rmdir fails with errno 31
o kern/165087  fs         [unionfs] lock violation in unionfs
o kern/164472  fs         [ufs] fsck -B panics on particular data inconsistency
o kern/164370  fs         [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261  fs         [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256  fs         [zfs] device entry for volume is not created after zfs
o kern/164184  fs         [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801  fs         [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770  fs         [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501  fs         [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944  fs         [coda] Coda file system module looks broken in 9.0
o kern/162860  fs         [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751  fs         [zfs] [panic] kernel panics during file operations
o kern/162591  fs         [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519  fs         [zfs] "zpool import" relies on buggy realpath() behavi
o kern/162195  fs         [softupdates] [panic] panic with soft updates journali
o kern/161968  fs         [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864  fs         [ufs] removing journaling from UFS partition fails on
o kern/161533  fs         [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438  fs         [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424  fs         [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280  fs         [zfs] Stack overflow in gptzfsboot
o kern/161205  fs         [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169  fs         [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112  fs         [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893  fs         [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860  fs         [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801  fs         [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790  fs         [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777  fs         [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706  fs         [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591  fs         [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410  fs         [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283  fs         [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930  fs         [ufs] [panic] kernel core
o kern/159402  fs         [zfs][loader] symlinks cause I/O errors
o kern/159357  fs         [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356  fs         [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351  fs         [nfs] [patch] - divide by zero in mountnfs()
o kern/159251  fs         [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077  fs         [zfs] Can't cd .. with latest zfs version
o kern/159048  fs         [smbfs] smb mount corrupts large files
o kern/159045  fs         [zfs] [hang] ZFS scrub freezes system
o kern/158839  fs         [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802  fs         amd(8) ICMP storm and unkillable process.
o kern/158231  fs         [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929  fs         [nfs] NFS slow read
o kern/157399  fs         [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179  fs         [zfs] [panic] zfs/dbuf.c: panic: solaris assert: arc_b
o kern/156797  fs         [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781  fs         [zfs] zfs is losing the snapshot directory,
p kern/156545  fs         [ufs] mv could break UFS on SMP systems
o kern/156193  fs         [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039  fs         [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615  fs         [zfs] zfs v28 broken on sparc64 -current
o kern/155587  fs         [zfs] [panic] kernel panic with zfs
p kern/155411  fs         [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199  fs         [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104   fs         [zfs][patch] use /dev prefix by default when importing
o kern/154930  fs         [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828  fs         [msdosfs] Unable to create directories on external USB
o kern/154491  fs         [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228  fs         [md] md getting stuck in wdrain state
o kern/153996  fs         [zfs] zfs root mount error while kernel is not located
o kern/153753  fs         [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716  fs         [zfs] zpool scrub time remaining is incorrect
o kern/153695  fs         [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680  fs         [xfs] 8.1 failing to mount XFS partitions
o kern/153418  fs         [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351  fs         [zfs] locking directories/files in ZFS
o bin/153258   fs         [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173  fs         [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142   fs         [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126  fs         [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022  fs         [nfs] nfs service hangs with linux client [regression]
o kern/151942  fs         [zfs] [panic] panic during ls(1) zfs snapshot director
o kern/151905  fs         [zfs] page fault under load in /sbin/zfs
o bin/151713   fs         [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648  fs         [zfs] disk wait bug
o kern/151629  fs         [fs] [patch] Skip empty directory entries during name
o kern/151330  fs         [zfs] will unshare all zfs filesystem after execute a
o kern/151326  fs         [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251  fs         [ufs] Can not create files on filesystem with heavy us
o kern/151226  fs         [zfs] can't delete zfs snapshot
o kern/150503  fs         [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501  fs         [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390  fs         [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336  fs         [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208  fs         mksnap_ffs(8) hang/deadlock
o kern/149173  fs         [patch] [zfs] make OpenSolaris installa
o kern/149015  fs         [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014  fs         [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013  fs         [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504  fs         [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490  fs         [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368  fs         [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138  fs         [zfs] zfs raidz pool commands freeze
o kern/147903  fs         [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881  fs         [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420  fs         [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941  fs         [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786  fs         [zfs] zpool import hangs with checksum errors
o kern/146708  fs         [softupdates] [panic] Kernel panic in softdep_disk_wri
o kern/146528  fs         [zfs] Severe memory leak in ZFS on i386
o kern/146502  fs         [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750  fs         [unionfs] [hang] unionfs locks the machine
s kern/145712  fs         [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411  fs         [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309   fs         bsdlabel: Editing disk label invalidates the whole dev
o kern/145272  fs         [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246  fs         [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238  fs         [zfs] [panic] kernel panic on zpool clear tank
o kern/145229  fs         [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189  fs         [nfs] nfsd performs abysmally under load
o kern/144929  fs         [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447  fs         [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416  fs         [panic] Kernel panic on online filesystem optimization
s kern/144415  fs         [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234  fs         [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825  fs         [nfs] [panic] Kernel panic on NFS client
o bin/143572   fs         [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212  fs         [nfs] NFSv4 client strange work ...
o kern/143184  fs         [zfs] [lor] zfs/bufwait LOR
o kern/142878  fs         [zfs] [vfs] lock order reversal
o kern/142489  fs         [zfs] [lor] allproc/zfs LOR
o kern/142466  fs         Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306  fs         [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068  fs         [ufs] BSD labels are got deleted spontaneously
o kern/141950  fs         [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897  fs         [msdosfs] [panic] Kernel panic.
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o 
bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] [panic] zpool scrub causes panic if geli vdevs d o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
357 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon May 5 18:30:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5407FCF2 for ; Mon, 5 May 2014 18:30:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2879DE6B for ; Mon, 5 May 2014 18:30:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s45IU1KA041427 for ; Mon, 5 May 2014 18:30:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s45IU0vD041426; Mon, 5 May 2014 18:30:01 GMT (envelope-from gnats) Date: Mon, 5 May 2014 18:30:01 GMT Message-Id: <201405051830.s45IU0vD041426@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: John Baldwin Subject: Re: kern/175328: [fusefs] [panic] fusefs kernel page fault Reply-To: John Baldwin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 May 2014 18:30:01 -0000 The following reply was made to PR kern/175328; it has been noted by GNATS. From: John Baldwin To: bug-followup@freebsd.org, denns@cknw.com Cc: mirror176@cox.net Subject: Re: kern/175328: [fusefs] [panic] fusefs kernel page fault Date: Mon, 5 May 2014 14:15:02 -0400 Please try applying the patch at http://people.freebsd.org/~jhb/patches/fuse_port.patch It fixed a similar panic for me.
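(For readers following the PR, fetching and applying such a patch to a source tree looks roughly like the sketch below; this is not from the original mail, /usr/src is an assumption, and the patch(1) strip level depends on how the diff was generated:)

  cd /usr/src
  fetch http://people.freebsd.org/~jhb/patches/fuse_port.patch
  patch < fuse_port.patch          # may need a -p option, depending on the diff's path prefixes
  make buildkernel installkernel   # rebuild and install the kernel, then reboot to test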
-- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Mon May 5 18:30:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6200ACF4 for ; Mon, 5 May 2014 18:30:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 37EAFE6D for ; Mon, 5 May 2014 18:30:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s45IU2RJ041433 for ; Mon, 5 May 2014 18:30:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s45IU1k6041432; Mon, 5 May 2014 18:30:01 GMT (envelope-from gnats) Date: Mon, 5 May 2014 18:30:01 GMT Message-Id: <201405051830.s45IU1k6041432@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: John Baldwin Subject: Re: kern/182739: [fusefs] [panic] sysutils/fusefs-kmod kernel panic on rsync Reply-To: John Baldwin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 May 2014 18:30:02 -0000 The following reply was made to PR kern/182739; it has been noted by GNATS. From: John Baldwin To: bug-followup@freebsd.org, admin@3dr.org Cc: mirror176@cox.net Subject: Re: kern/182739: [fusefs] [panic] sysutils/fusefs-kmod kernel panic on rsync Date: Mon, 5 May 2014 14:15:34 -0400 Please try the patch at http://people.freebsd.org/~jhb/patches/fuse_port.patch -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Tue May 6 00:50:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EA5A1F5C for ; Tue, 6 May 2014 00:50:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C91C8913 for ; Tue, 6 May 2014 00:50:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s460o0Om070386 for ; Tue, 6 May 2014 00:50:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s460o0Ru070369; Tue, 6 May 2014 00:50:00 GMT (envelope-from gnats) Date: Tue, 6 May 2014 00:50:00 GMT Message-Id: <201405060050.s460o0Ru070369@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: "Steven Hartland" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 May 2014 00:50:02 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. 
From: "Steven Hartland" To: , Cc: Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Tue, 6 May 2014 01:44:29 +0100 This is a multi-part message in MIME format. ------=_NextPart_000_0300_01CF68CC.B8749960 Content-Type: text/plain; format=flowed; charset="Windows-1252"; reply-type=original Content-Transfer-Encoding: 7bit Can you try building a debug kernel with the attached patch. It should allow you to configure a dump device before the root device is mounted by specifying it in /boot/loader.conf kern.shutdown.defaultdumpdev="ada4p3" Ensure you choose a valid device such a swap device. If all goes well that will enable you to get a proper stack trace from the dump. Regards Steve ------=_NextPart_000_0300_01CF68CC.B8749960 Content-Type: application/octet-stream; name="default-dump-dev.patch" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="default-dump-dev.patch" Quick patch which configures kernel dump location prior to mounting root.=0A= =0A= The location is controlled by the new tunable:=0A= kern.shutdown.defaultdumpdev=0A= =0A= An example of configuring it would be to add the following to = /boot/loader.conf:=0A= kern.shutdown.defaultdumpdev=3D"ada4p3"=0A= =0A= This would configure kernel dumps on ata disk 4 partition 3.=0A= =0A= The usual rules should be maintained when picking a device i.e. choose a = device=0A= use for swap or otherwise unused.=0A= --- sys/kern/kern_shutdown.c.orig 2014-05-04 23:37:01.954116628 +0000=0A= +++ sys/kern/kern_shutdown.c 2014-05-06 00:28:54.591101862 +0000=0A= @@ -50,12 +50,14 @@ __FBSDID("$FreeBSD: releng/10.0/sys/kern=0A= #include =0A= #include =0A= #include =0A= +#include =0A= #include =0A= #include =0A= #include =0A= #include =0A= #include =0A= #include =0A= +#include =0A= #include =0A= #include =0A= #include =0A= @@ -72,6 +74,9 @@ __FBSDID("$FreeBSD: releng/10.0/sys/kern=0A= =0A= #include =0A= =0A= +#include =0A= +#include =0A= +=0A= #include =0A= #include =0A= #include =0A= @@ -245,6 +250,72 @@ print_uptime(void)=0A= printf("%lds\n", (long)ts.tv_sec);=0A= }=0A= =0A= +static char defaultdumpdev[MAXPATHLEN];=0A= +TUNABLE_STR("kern.shutdown.defaultdumpdev", defaultdumpdev,=0A= + sizeof(defaultdumpdev));=0A= +SYSCTL_STRING(_kern_shutdown, OID_AUTO, defaultdumpdev, CTLFLAG_RDTUN,=0A= + defaultdumpdev, 0, "Default device for early kernel dumps");=0A= +=0A= +int=0A= +setdumpdev(char *devname)=0A= +{=0A= + struct thread *td =3D curthread;=0A= + int error, i, ref;=0A= + struct g_consumer *cp;=0A= + struct g_kerneldump kd;=0A= + struct cdev_priv *cdp;=0A= + struct cdev *dev;=0A= + struct cdevsw *dsw;=0A= +=0A= + if (devname =3D=3D NULL || strlen(devname) =3D=3D 0)=0A= + return (set_dumper(NULL, NULL));=0A= +=0A= + dev =3D NULL;=0A= + dev_lock();=0A= + TAILQ_FOREACH(cdp, &cdevp_list, cdp_list) {=0A= + dev =3D &cdp->cdp_c;=0A= + if (strcmp(dev->si_name, devname) =3D=3D 0)=0A= + break;=0A= + dev =3D NULL;=0A= + }=0A= + dev_unlock();=0A= +=0A= + if (dev =3D=3D NULL)=0A= + return (ENOENT);=0A= +=0A= + dsw =3D dev_refthread(dev, &ref);=0A= + if (dsw =3D=3D NULL)=0A= + return (ENXIO);=0A= +=0A= + error =3D dsw->d_open(dev, FREAD, 0, td);=0A= + if (error !=3D 0) {=0A= + dev_relthread(dev, ref);=0A= + return (error);=0A= + }=0A= +=0A= + cp =3D dev->si_drv2;=0A= + kd.offset =3D 0;=0A= + kd.length =3D OFF_MAX;=0A= + i =3D sizeof(kd);=0A= + error =3D g_io_getattr("GEOM::kerneldump", cp, &i, &kd);=0A= + if (error =3D=3D 0) {=0A= + error =3D set_dumper(&kd.di, devtoname(dev));=0A= + if (error =3D=3D 0)=0A= + 
dev->si_flags |=3D SI_DUMPDEV;=0A= + }=0A= +=0A= + (void)dev->si_devsw->d_close(dev, FREAD, 0, td);=0A= + dev_relthread(dev, ref);=0A= +=0A= + return (error);=0A= +}=0A= +=0A= +int=0A= +setdumpdev_default(void)=0A= +{=0A= + return (setdumpdev(defaultdumpdev));=0A= +}=0A= +=0A= int=0A= doadump(boolean_t textdump)=0A= {=0A= --- sys/kern/init_main.c.orig 2014-05-05 18:06:24.008837474 +0000=0A= +++ sys/kern/init_main.c 2014-05-05 22:39:52.964175470 +0000=0A= @@ -697,6 +697,8 @@ start_init(void *dummy)=0A= struct thread *td;=0A= struct proc *p;=0A= =0A= + setdumpdev_default();=0A= +=0A= mtx_lock(&Giant);=0A= =0A= GIANT_REQUIRED;=0A= --- sys/sys/conf.h.orig 2014-05-05 02:20:24.408440686 +0000=0A= +++ sys/sys/conf.h 2014-05-05 15:21:06.150967151 +0000=0A= @@ -338,6 +338,8 @@ int set_dumper(struct dumperinfo *, cons=0A= int dump_write(struct dumperinfo *, void *, vm_offset_t, off_t, size_t);=0A= void dumpsys(struct dumperinfo *);=0A= int doadump(boolean_t);=0A= +int setdumpdev_default(void);=0A= +int setdumpdev(char *);=0A= extern int dumping; /* system is dumping */=0A= =0A= #endif /* _KERNEL */=0A= ------=_NextPart_000_0300_01CF68CC.B8749960-- From owner-freebsd-fs@FreeBSD.ORG Wed May 7 17:58:04 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0B770117 for ; Wed, 7 May 2014 17:58:04 +0000 (UTC) Received: from limerock03.mail.cornell.edu (limerock03.mail.cornell.edu [128.84.12.34]) by mx1.freebsd.org (Postfix) with ESMTP id C443D8F9 for ; Wed, 7 May 2014 17:58:03 +0000 (UTC) Received: from limerock02.mail.cornell.edu (limerock02.mail.cornell.edu [128.84.12.100]) by limerock03.mail.cornell.edu (8.14.4/8.14.4_cu) with ESMTP id s47Hvu7G013778 for ; Wed, 7 May 2014 13:57:56 -0400 X-CornellRouted: This message has been Routed already. Received: from exchange.cornell.edu (cashub07.exchange.cornell.edu [10.16.197.26]) by limerock02.mail.cornell.edu (8.14.4/8.14.4_cu) with ESMTP id s47HvqGo023960 for ; Wed, 7 May 2014 13:57:56 -0400 Received: from na01-bl2-obe.outbound.protection.outlook.com (207.46.163.210) by exchange.cornell.edu (128.253.27.26) with Microsoft SMTP Server (TLS) id 14.3.158.1; Wed, 7 May 2014 13:57:55 -0400 Received: from BY2PR04MB096.namprd04.prod.outlook.com (10.242.37.153) by BY2PR04MB095.namprd04.prod.outlook.com (10.242.37.149) with Microsoft SMTP Server (TLS) id 15.0.929.12; Wed, 7 May 2014 17:57:48 +0000 Received: from BY2PR04MB096.namprd04.prod.outlook.com ([169.254.5.71]) by BY2PR04MB096.namprd04.prod.outlook.com ([169.254.5.71]) with mapi id 15.00.0929.001; Wed, 7 May 2014 17:57:47 +0000 From: "Marty J. 
Sullivan" To: "freebsd-fs@freebsd.org" Subject: nfsv4 server with ACL's for RHEL clients Thread-Topic: nfsv4 server with ACL's for RHEL clients Thread-Index: Ac9qGu4C3SRWaSaqQ0SEwfb3dPCttA== Date: Wed, 7 May 2014 17:57:47 +0000 Message-ID: <89bb0dc035824b8f9c05da1615b030aa@BY2PR04MB096.namprd04.prod.outlook.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [132.236.71.226] x-forefront-prvs: 0204F0BDE2 x-forefront-antispam-report: SFV:NSPM; SFS:(10009001)(6009001)(428001)(199002)(189002)(74502001)(74662001)(15975445006)(85852003)(92566001)(81342001)(83072002)(99286001)(79102001)(77982001)(20776003)(81542001)(99396002)(19580395003)(19300405004)(87936001)(80022001)(66066001)(76576001)(15202345003)(2656002)(50986999)(74316001)(54356999)(4396001)(76482001)(33646001)(86362001)(83322001)(31966008)(16236675002)(75432001)(101416001)(19625215002)(46102001)(21314002)(24736002); DIR:OUT; SFP:1101; SCL:1; SRVR:BY2PR04MB095; H:BY2PR04MB096.namprd04.prod.outlook.com; FPR:3E32F006.ACDE8E89.3AF061BB.88E9D831.20386; MLV:sfv; PTR:InfoNoRecords; A:1; MX:1; LANG:en; received-spf: None (: cornell.edu does not designate permitted sender hosts) MIME-Version: 1.0 X-OriginatorOrg: cornell.edu Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 May 2014 17:58:04 -0000 I am testing FreeBSD 10.0 for use as a ZFS storage server. Currently I am t= esting Active Directory integration and serving files via AFP, SMB/CIFS, an= d NFSv4. My current production environment contains mostly Linux (CentOS/RH= EL) and OSX machines all bound to the same Active Directory domain. So far, I have gotten the Active Directory authentication set up via Samba4= .1+Winbind and it is working nicely as are the related CIFS shares. I also = have AFP set up via afpd and it is also working great. ACL's a treated the = same way as they are on other systems in my production environment. Where I am having trouble is getting NFSv4 to work with ACL's. First off, I= am very used to NFS on Linux and so the /etc/exports syntax is almost cert= ainly what is causing my troubles. On RHEL, here is what my /etc/exports mi= ght look like: /data mycomputer.mydomain.com(rw,no_root_squash) And I start mountd with the option "--manage-gids" so that gid's are not ma= naged by the client (since they would then be limited to 16 groups). This w= orks great and ACL's work fine across all of my Linux systems. On FreeBSD, this is what I have for my /etc/exports at the current time: V4: / mycomputer.mydomain.com /data -maproot=3Droot -network xxx.xxx.xxx.xxx -mask xxx.xxx.xxx.xxx Now, I've read many posts about this syntax and I can't seem to find a stra= ight answer as to whether the "/data" entry below the "V4:" entry applies t= o NFSv4 or NFSv3. Either way, it doesn't really work. I've tried tinkering = with these exports in many permutations and I just can't get it to work. Mo= st of the time the machine will be denied access (due to bad exports file).= Other times, it will mount but will just say "Input/Output error" when I t= ry to read from the share. And finally, sometimes I can mount the share on = an RHEL system, but when I use nfs4_getfacl, it says that the operation is = not supported by the server. 
My other concern is, even if I get the ACL's to work, mountd on the FreeBSD server doesn't have a similar option to --manage-gids so the NFS group limitation will apply to the RHEL clients. I've read about gssd and kerberizing, but I don't feel like that's possible on the RHEL clients. So how do I solve this problem?? Any help with this is appreciated.

From owner-freebsd-fs@FreeBSD.ORG Wed May 7 22:05:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3F499F2C for ; Wed, 7 May 2014 22:05:29 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 041F23FE for ; Wed, 7 May 2014 22:05:28 +0000 (UTC) Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 07 May 2014 18:05:21 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id B4DD7B3F12; Wed, 7 May 2014 18:05:21 -0400 (EDT) Date: Wed, 7 May 2014 18:05:21 -0400 (EDT) From: Rick Macklem To: "Marty J. Sullivan" Message-ID: <539317876.4358954.1399500321729.JavaMail.root@uoguelph.ca> In-Reply-To: <89bb0dc035824b8f9c05da1615b030aa@BY2PR04MB096.namprd04.prod.outlook.com> Subject: Re: nfsv4 server with ACL's for RHEL clients MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 May 2014 22:05:29 -0000

Marty J. Sullivan wrote:
> I am testing FreeBSD 10.0 for use as a ZFS storage server. Currently
> I am testing Active Directory integration and serving files via AFP,
> SMB/CIFS, and NFSv4. My current production environment contains
> mostly Linux (CentOS/RHEL) and OSX machines all bound to the same
> Active Directory domain.
>
> So far, I have gotten the Active Directory authentication set up via
> Samba4.1+Winbind and it is working nicely as are the related CIFS
> shares. I also have AFP set up via afpd and it is also working
> great. ACL's are treated the same way as they are on other systems in
> my production environment.
>
> Where I am having trouble is getting NFSv4 to work with ACL's. First
> off, I am very used to NFS on Linux and so the /etc/exports syntax
> is almost certainly what is causing my troubles. On RHEL, here is
> what my /etc/exports might look like:
>
> /data mycomputer.mydomain.com(rw,no_root_squash)
>
> And I start mountd with the option "--manage-gids" so that gid's are
> not managed by the client (since they would then be limited to 16
> groups). This works great and ACL's work fine across all of my Linux
> systems.
>
>
> On FreeBSD, this is what I have for my /etc/exports at the current
> time:
>
> V4: / mycomputer.mydomain.com
> /data -maproot=root -network xxx.xxx.xxx.xxx -mask xxx.xxx.xxx.xxx
>
> Now, I've read many posts about this syntax and I can't seem to find
> a straight answer as to whether the "/data" entry below the "V4:"
> entry applies to NFSv4 or NFSv3.
The line applies to both. However, you have not exported "/". The "V4:" line just defines where the NFSv4 root is; it does not export any file system. If you change the above to:

V4: /data -network xxx.xxx.xxx.xxx -mask xxx.xxx.xxx.xxx
/data -maproot=root -network xxx.xxx.xxx.xxx -mask xxx.xxx.xxx.xxx

Then "/data" is mounted via:
# mount -t nfs -o vers=4 :/ /mnt

> Either way, it doesn't really work.
> I've tried tinkering with these exports in many permutations and I
> just can't get it to work. Most of the time the machine will be
> denied access (due to bad exports file). Other times, it will mount
> but will just say "Input/Output error" when I try to read from the
> share. And finally, sometimes I can mount the share on an RHEL
> system, but when I use nfs4_getfacl, it says that the operation is
> not supported by the server.
>
At one time, the Linux client tried to munge a POSIX draft ACL into an NFSv4 ACL. I have no idea if nfs4_getfacl supports native NFSv4 (aka Windows style) ACLs.

> My other concern is, even if I get the ACL's to work, mountd on the
> FreeBSD server doesn't have a similar option to --manage-gids so the
> NFS group limitation will apply to the RHEL clients. I've read about
> gssd and kerberizing, but I don't feel like that's possible on the
> RHEL clients. So how do I solve this problem??
>
There is nothing like "--manage-gids" on FreeBSD. If you have users in more than 16 groups, the NFS server will only see the first 16 (or 17 if the Linux client doesn't duplicate the gid in the gid list for the AUTH_SYS authenticator). The only way to avoid this would be using Kerberos instead of AUTH_SYS.

rick

> Any help with this is appreciated.
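(Putting Rick's suggestion together, a complete /etc/exports in this style could read as below; the 192.0.2.0/24 network, the server name, and the client-side command are illustrative assumptions, not taken from the original mails:)

  V4: /data -network 192.0.2.0 -mask 255.255.255.0
  /data -maproot=root -network 192.0.2.0 -mask 255.255.255.0

After editing the file, mountd typically has to re-read it (e.g. service mountd reload), and a Linux client would then mount the NFSv4 root with something like:

  # mount -t nfs4 nfsserver.mydomain.com:/ /mnt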
From owner-freebsd-fs@FreeBSD.ORG Fri May 9 20:22:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 83FEADA1 for ; Fri, 9 May 2014 20:22:23 +0000 (UTC) Received: from mail-lb0-x230.google.com (mail-lb0-x230.google.com [IPv6:2a00:1450:4010:c04::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EF4D9F68 for ; Fri, 9 May 2014 20:22:22 +0000 (UTC) Received: by mail-lb0-f176.google.com with SMTP id p9so5755046lbv.21 for ; Fri, 09 May 2014 13:22:20 -0700 (PDT) MIME-Version: 1.0 X-Received: by 10.112.26.199 with SMTP id n7mr12374866lbg.27.1399666940760; Fri, 09 May 2014 13:22:20 -0700 (PDT) Received: by 10.114.78.102 with HTTP; Fri, 9 May 2014 13:22:20 -0700 (PDT) Date: Fri, 9 May 2014 13:22:20 -0700 Message-ID: Subject: Data loss in ZFS filesystem: directory disappears after extattr issues From: javocado To: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 May 2014 20:22:23 -0000

OS: freebsd 8.3-RELEASE

I am seeing an issue where a directory in a ZFS filesystem has just disappeared. When I do an 'ls' in that filesystem, I get:

# ls
ls: backup: No such file or directory
regular_file1 regular_file2 regular_file3
#

... so the OS knows it is there, but then it can't show it. Note that I was running just a plain old 'ls' command, so if it really wasn't there, it would just show nothing. Also, I can autocomplete the "backup" directory from the shell ... so somehow it sees it there, but won't do anything with it.

This behavior also appears in the snapshots I am making. Unfortunately I did not notice this before several older snapshots had been destroyed. So I'll never know whether the snapshots (rotated daily) taken before this directory became inaccessible were also affected.

I have tried the following, but nothing has fixed the issue:
- rebooting
- scrub'ing the pool
- zfs send | receive to a new filesystem

The only interesting thing about this filesystem, and this directory, is that it contains a lot of files with extended attributes. Further, these extattrs were crashing the system regularly until we applied a patch from Pawel: http://pastebin.com/0mu50B7T ... now the crashing has stopped, but that directory has ... disappeared.
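(For readers unfamiliar with extended attributes on FreeBSD, the stock tools can list and read them per file; a small sketch, where the attribute name is a placeholder and regular_file1 is taken from the ls output above:)

  lsextattr user regular_file1             # list attributes in the user namespace
  getextattr user some_attr regular_file1  # print the value of one attribute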
This directory that is now gone is absolutely the directory whose contents were causing the crashes. We have a lot of questions about this, but maybe we can start with: What in the world is going on here? Thanks.

From owner-freebsd-fs@FreeBSD.ORG Mon May 12 11:06:43 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7CB72A5D for ; Mon, 12 May 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6876D26C0 for ; Mon, 12 May 2014 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4CB6hB6067793 for ; Mon, 12 May 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4CB6hZn067791 for freebsd-fs@FreeBSD.org; Mon, 12 May 2014 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 12 May 2014 11:06:43 GMT Message-Id: <201405121106.s4CB6hZn067791@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 12 May 2014 11:06:43 -0000

Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker Resp. Description
--------------------------------------------------------------------------------
o kern/189355 fs [zfs] zfs panic on root mount 10-stable
o kern/188443 fs [smbfs] Segfault with tail(1) when mmap(2) called
o kern/188328 fs [zfs] UPDATING should provide caveats for running `zpo
o kern/188187 fs [zfs] [panic] 10-stable: Kernel panic on zpool import:
o kern/187905 fs [zpool] Confusion zpool with a block size in HDD - blo
o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time
o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix
s kern/187414 fs [zfs] ZFS Write Deadlock on 8.4
o kern/187261 fs [fusefs] FUSE kernel panic when using socket / bind
o kern/186942 fs [zfs] [panic] Fatal trap 12 (seems zfs related)
o kern/186720 fs [xfs] is xfs now unsupported in the kernel?
o kern/186645 fs [fusefs] Crash after unmounting wdfs
o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over
o kern/186112 fs [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479
o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185734 fs [zfs] [panic] panic on stable/10 when writing to ZFS d
o kern/185374 fs [msdosfs] [panic] Unmounting msdos filesystem in a bad
o kern/184677 fs [zfs] [panic] ZFS snapshot umount kernel panic
o kern/184478 fs [smbfs] mount_smbfs cannot read/write files
o kern/184013 fs [fusefs] truecrypt broken (probably fusefs issue)
o kern/183077 fs [opensolaris] [patch] don't have the compiler inline t
o kern/182739 fs [fusefs] [panic] sysutils/fusefs-kmod kernel panic on
o kern/182536 fs [zfs] zfs deadlock
o kern/181966 fs [zfs] [panic] Kernel panic in ZFS I/O: solaris assert:
o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181791 fs [zfs] ZFS ARC Deadlock
o kern/181565 fs [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool
o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979 fs [netsmb][patch]: Fix large files handling
o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678 fs [NFS] succesfully exported filesystems being reported
o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854 fs [ufs] FreeBSD kernel crash in UFS
s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS
o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize
o kern/178387 fs [zfs] [patch] sparse files performance improvements
o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see
o kern/178329 fs [zfs] extended attributes leak
o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat
o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966 fs [zfs] resilver completes but subsequent scrub reports
o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445 fs [hast] HAST panic
o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong
o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime
o kern/175897 fs [zfs] operations on readonly zpool hang
o kern/175449 fs [unionfs] unionfs and devfs misbehaviour
o kern/175328 fs [fusefs] [panic] fusefs kernel page fault
o kern/175179 fs [zfs] ZFS may attach wrong device on move
o kern/175071 fs [softupdates] [panic] softdep_deallocate_dependencies:
o kern/174372 fs [zfs] Pagefault appears to be related to ZFS
o kern/174315 fs [zfs] chflags uchg not supported
o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi
o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718 fs [zfs] phantom directory in zraid2 pool
f kern/173657 fs [nfs] strange UID map with nfsuserd
o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136 fs [unionfs] mounting above the NFS read-only share panic
o kern/172942 fs [smbfs] Unmounting a smb mount when the server became
o kern/172630 fs [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c
o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly
o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus
f kern/172197 fs [zfs] Userquota (as well as groupquota) does not work
o kern/172092 fs [zfs] [panic] zfs import panics kernel
o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental
o kern/170945 fs [gpt] disk layout not portable between direct connect
o bin/170778 fs [zfs] [panic] FreeBSD panics randomly
o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170523 fs [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNT
o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167362 fs [fusefs] Reproduceble Page Fault when running rsync ov
o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067 fs [zfs] [panic] ZFS panics the server
o kern/167065 fs [zfs] boot fails when a spare is the boot disk
o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477 fs [nfs] NFS data corruption.
o kern/165950 fs [ffs] SU+J and fsck problem
o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392 fs [ufs] [patch] Multiple mkdir/rmdir fails with errno 31
o kern/165087 fs [unionfs] lock violation in unionfs
o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency
o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256 fs [zfs] device entry for volume is not created after zfs
o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944 fs [coda] Coda file system module looks broken in 9.0
o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751 fs [zfs] [panic] kernel panics during file operations
o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi
o kern/162195 fs [softupdates] [panic] panic with soft updates journali
o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864 fs [ufs] removing journaling from UFS partition fails on
o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280 fs [zfs] Stack overflow in gptzfsboot
o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930 fs [ufs] [panic] kernel core
o kern/159402 fs [zfs][loader] symlinks cause I/O errors
o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs()
o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077 fs [zfs] Can't cd .. with latest zfs version
o kern/159048 fs [smbfs] smb mount corrupts large files
o kern/159045 fs [zfs] [hang] ZFS scrub freezes system
o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802 fs amd(8) ICMP storm and unkillable process.
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929 fs [nfs] NFS slow read
o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179 fs [zfs] [panic] zfs/dbuf.c: panic: solaris assert: arc_b
o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781 fs [zfs] zfs is losing the snapshot directory,
p kern/156545 fs [ufs] mv could break UFS on SMP systems
o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current
o kern/155587 fs [zfs] [panic] kernel panic with zfs
p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104 fs [zfs][patch] use /dev prefix by default when importing
o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828 fs [msdosfs] Unable to create directories on external USB
o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228 fs [md] md getting stuck in wdrain state
o kern/153996 fs [zfs] zfs root mount error while kernel is not located
o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716 fs [zfs] zpool scrub time remaining is incorrect
o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions
o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351 fs [zfs] locking directories/files in ZFS
o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] [panic] panic during ls(1) zfs snapshot director
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [softupdates] [panic] Kernel panic in softdep_disk_wri
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750 fs [unionfs] [hang] unionfs locks the machine
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr
o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o kern/118318 fs [nfs] NFS server hangs under special circumstances
o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] [panic] zpool scrub causes panic if geli vdevs d
o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem
o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498 fs [request] newfs(8) has no option to clear the first 12
o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849 fs [ufs] rename on UFS filesystem is not atomic
o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733 fs [smbfs] smbfs may cause double unlock
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134 fs [smbfs] [patch] Preserve access and modification time
a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657 fs [smbfs] windows client hang when browsing a samba shar
o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 357 problems total. From owner-freebsd-fs@FreeBSD.ORG Wed May 14 14:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A05959AD for ; Wed, 14 May 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 71A80244D for ; Wed, 14 May 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4EEK0x4088402 for ; Wed, 14 May 2014 14:20:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4EEK0qT088400; Wed, 14 May 2014 14:20:00 GMT (envelope-from gnats) Date: Wed, 14 May 2014 14:20:00 GMT Message-Id: <201405141420.s4EEK0qT088400@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: "Steven Hartland" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 May 2014 14:20:01 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. From: "Steven Hartland" To: , Cc: Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Wed, 14 May 2014 15:11:10 +0100 Any luck with the patch Radim? 
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 05:08:58 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0B573B12; Thu, 15 May 2014 05:08:58 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D240B2123; Thu, 15 May 2014 05:08:57 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4F58vro007586; Thu, 15 May 2014 05:08:57 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4F58vcs007585; Thu, 15 May 2014 05:08:57 GMT (envelope-from linimon) Date: Thu, 15 May 2014 05:08:57 GMT Message-Id: <201405150508.s4F58vcs007585@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/189826: [zfs] zpool create using gmirror partition hard-hangs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 05:08:58 -0000 Synopsis: [zfs] zpool create using gmirror partition hard-hangs Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu May 15 05:08:46 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=189826 From owner-freebsd-fs@FreeBSD.ORG Thu May 15 12:15:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7E04EE05 for ; Thu, 15 May 2014 12:15:15 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 369462166 for ; Thu, 15 May 2014 12:15:14 +0000 (UTC) Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s4FCFC9e069195 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL) for ; Thu, 15 May 2014 08:15:12 -0400 (EDT) (envelope-from mikej@mikej.com) Received: from mail.mikej.com (firewall [192.168.6.63]) by firewall.mikej.com (8.14.8/8.14.7) with ESMTP id s4FCEp9n014935 for ; Thu, 15 May 2014 08:15:11 -0400 (EDT) (envelope-from mikej@mikej.com) X-Authentication-Warning: firewall.mikej.com: Host firewall [192.168.6.63] claimed to be mail.mikej.com MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Thu, 15 May 2014 08:14:51 -0400 From: Michael Jung To: freebsd-fs@freebsd.org Subject: Understanding ASHIFT SSD and adding a mirrored vdev Message-ID: <5703300e3b052a3f4d14f1957ded08ce@mail.mikej.com> X-Sender: mikej@mikej.com User-Agent: Roundcube Webmail/1.0.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 12:15:15 -0000 Hi: I have read so many threads about ASHIFT on SSD my head is spinning. This is my current partition created with a 10-stable installer. 2x3TB drives in mirror and ashift=12 looks correct. I added the SSD which defaulted to ashift=9 and this is where my question is. Is alignment on SSD important as from what I read we never really know the flash layout? I hope there is a simple answer to this ;-) Secondly, I want to add another vdev, 2x mirrored 3TB drives, and it seems that ashift would be vdev specific so I would need to gnop the new drives before adding the new vdev?
then zfs add zroot mirror ada3 ada4 Regards, --mikej root@firewall:/home/mikej # zdb -C zroot MOS Configuration: version: 5000 name: 'zroot' state: 0 txg: 62296 pool_guid: 15958487588614144860 hostid: 2738252912 hostname: 'firewall' vdev_children: 2 vdev_tree: type: 'root' id: 0 guid: 15958487588614144860 children[0]: type: 'mirror' id: 0 guid: 3592648208679324941 metaslab_array: 33 metaslab_shift: 34 ashift: 12 asize: 2983407648768 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 17048837379599232125 path: '/dev/gptid/a514b75b-d9f9-11e3-9b31-001b211e2e44' phys_path: '/dev/gptid/a514b75b-d9f9-11e3-9b31-001b211e2e44' whole_disk: 1 DTL: 157 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 2010069802937618121 path: '/dev/gptid/a604d635-d9f9-11e3-9b31-001b211e2e44' phys_path: '/dev/gptid/a604d635-d9f9-11e3-9b31-001b211e2e44' whole_disk: 1 DTL: 156 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 12355431372362468741 path: '/dev/ada2p1' phys_path: '/dev/ada2p1' whole_disk: 1 metaslab_array: 195 metaslab_shift: 26 ashift: 9 asize: 8585216000 is_log: 1 create_txg: 13825 features_for_read: root@firewall:/home/mikej # root@firewall:/home/mikej # zpool status pool: zroot state: ONLINE status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable. action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(7) for details. scan: scrub repaired 0 in 1h14m with 0 errors on Tue May 13 08:42:29 2014 config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 gptid/a514b75b-d9f9-11e3-9b31-001b211e2e44 ONLINE 0 0 0 gptid/a604d635-d9f9-11e3-9b31-001b211e2e44 ONLINE 0 0 0 logs ada2p1 ONLINE 0 0 0 cache ada2p2 ONLINE 0 0 0 errors: No known data errors root@firewall:/home/mikej # root@firewall:/home/mikej # gpart list ada0 Geom name: ada0 modified: false state: OK fwheads: 16 fwsectors: 63 last: 5860533134 first: 34 entries: 128 scheme: GPT Providers: 1. Name: ada0p1 Mediasize: 524288 (512K) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r0w0e0 rawuuid: a4adfc12-d9f9-11e3-9b31-001b211e2e44 rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: gptboot0 length: 524288 offset: 20480 type: freebsd-boot index: 1 end: 1063 start: 40 2. Name: ada0p2 Mediasize: 17179869184 (16G) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e1 rawuuid: a4e5bce0-d9f9-11e3-9b31-001b211e2e44 rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap0 length: 17179869184 offset: 544768 type: freebsd-swap index: 2 end: 33555495 start: 1064 3. Name: ada0p3 Mediasize: 2983412547584 (2.7T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e2 rawuuid: a514b75b-d9f9-11e3-9b31-001b211e2e44 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: zfs0 length: 2983412547584 offset: 17180413952 type: freebsd-zfs index: 3 end: 5860533127 start: 33555496 Consumers: 1. Name: ada0 Mediasize: 3000592982016 (2.7T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r2w2e5 root@firewall:/home/mikej # gpart list ada1 Geom name: ada1 modified: false state: OK fwheads: 16 fwsectors: 63 last: 5860533134 first: 34 entries: 128 scheme: GPT Providers: 1. 
Name: ada1p1 Mediasize: 524288 (512K) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r0w0e0 rawuuid: a5a0aa88-d9f9-11e3-9b31-001b211e2e44 rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: gptboot1 length: 524288 offset: 20480 type: freebsd-boot index: 1 end: 1063 start: 40 2. Name: ada1p2 Mediasize: 17179869184 (16G) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e1 rawuuid: a5d8019e-d9f9-11e3-9b31-001b211e2e44 rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap1 length: 17179869184 offset: 544768 type: freebsd-swap index: 2 end: 33555495 start: 1064 3. Name: ada1p3 Mediasize: 2983412547584 (2.7T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e2 rawuuid: a604d635-d9f9-11e3-9b31-001b211e2e44 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: zfs1 length: 2983412547584 offset: 17180413952 type: freebsd-zfs index: 3 end: 5860533127 start: 33555496 Consumers: 1. Name: ada1 Mediasize: 3000592982016 (2.7T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r2w2e5 root@firewall:/home/mikej # gpart list ada2 Geom name: ada2 modified: false state: OK fwheads: 16 fwsectors: 63 last: 250069646 first: 34 entries: 128 scheme: GPT Providers: 1. Name: ada2p1 Mediasize: 8589934592 (8.0G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 17408 Mode: r1w1e1 rawuuid: 31d66c14-daba-11e3-a0ae-001b211e2e44 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: logs length: 8589934592 offset: 17408 type: freebsd-zfs index: 1 end: 16777249 start: 34 2. Name: ada2p2 Mediasize: 119445707264 (111G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 17408 Mode: r1w1e1 rawuuid: 37b41a84-daba-11e3-a0ae-001b211e2e44 rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: cache length: 119445707264 offset: 8589952000 type: freebsd-zfs index: 2 end: 250069646 start: 16777250 Consumers: 1. Name: ada2 Mediasize: 128035676160 (119G) Sectorsize: 512 Mode: r2w2e4 From owner-freebsd-fs@FreeBSD.ORG Thu May 15 12:40:04 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2331B36A for ; Thu, 15 May 2014 12:40:04 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 04804237C for ; Thu, 15 May 2014 12:40:04 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4FCe1TI087809 for ; Thu, 15 May 2014 12:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4FCe1Hw087808; Thu, 15 May 2014 12:40:01 GMT (envelope-from gnats) Date: Thu, 15 May 2014 12:40:01 GMT Message-Id: <201405151240.s4FCe1Hw087808@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: Re: kern/189355: zfs panic on 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 12:40:04 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. 
From: Radim Kolar To: "bug-followup@FreeBSD.org" , "killing@multiplay.co.uk" Cc: Subject: Re: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 12:38:20 +0000
With your patch, I was able to capture a crash dump. This patch should be added to mainstream BSD to help people capture boot panics. Here is the beginning of the core.txt file. Let me know if you need more info.
Trying to mount root from zfs:root []...
Fatal double fault: eip = 0xc0cb801e esp = 0xd93eff98 ebp = 0xd93f0300 cpuid = 0; apic id = 00
panic: double fault
cpuid = 0
KDB: enter: panic
Reading symbols from /boot/kernel/zfs.ko.symbols...done. Loaded symbols for /boot/kernel/zfs.ko.symbols
Reading symbols from /boot/kernel/krpc.ko.symbols...done. Loaded symbols for /boot/kernel/krpc.ko.symbols
Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. Loaded symbols for /boot/kernel/opensolaris.ko.symbols
Reading symbols from /boot/kernel/if_vmx.ko.symbols...done. Loaded symbols for /boot/kernel/if_vmx.ko.symbols
Reading symbols from /boot/kernel/cc_hd.ko.symbols...done. Loaded symbols for /boot/kernel/cc_hd.ko.symbols
Reading symbols from /boot/kernel/h_ertt.ko.symbols...done. Loaded symbols for /boot/kernel/h_ertt.ko.symbols
#0 doadump (textdump=0) at pcpu.h:233
233 pcpu.h: No such file or directory. in pcpu.h
(kgdb) #0 doadump (textdump=0) at pcpu.h:233
#1 0xc04c87d1 in db_dump (dummy=-1066847139, dummy2=0, dummy3=-1, dummy4=0xc0a808b4 "") at /usr/src/sys/ddb/db_command.c:543
#2 0xc04c82cb in db_command (cmd_table=<value optimized out>) at /usr/src/sys/ddb/db_command.c:449
#3 0xc04c8010 in db_command_loop () at /usr/src/sys/ddb/db_command.c:502
#4 0xc04ca8a1 in db_trap (type=<value optimized out>, code=<value optimized out>) at /usr/src/sys/ddb/db_main.c:231
#5 0xc0693bd4 in kdb_trap (type=<value optimized out>, code=<value optimized out>, tf=<value optimized out>) at /usr/src/sys/kern/subr_kdb.c:656
#6 0xc093918f in trap (frame=<value optimized out>) at /usr/src/sys/i386/i386/trap.c:712
#7 0xc092336c in calltrap () at /usr/src/sys/i386/i386/exception.s:170
#8 0xc069345d in kdb_enter (why=0xc09a7ef8 "panic", msg=<value optimized out>) at cpufunc.h:71
#9 0xc065710f in panic (fmt=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:823
#10 0xc0939acb in dblfault_handler () at /usr/src/sys/i386/i386/trap.c:1072
#11 0xc0cb801e in vdev_queue_io_to_issue (vq=0xc46a4b00) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:471
#12 0xc0cb7fb8 in vdev_queue_io (zio=0xc4855000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:744
#13 0xc0cd84ee in zio_vdev_io_start (ziop=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2607
#14 0xc0cd4c18 in zio_execute (zio=0xc4855000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1350
#15 0xc0cb74e4 in vdev_mirror_io_start (zio=0xc4823894) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:284
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 12:45:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 48C5372D for ; Thu, 15 May 2014 12:45:14 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 0CB1C2429 for ; Thu, 15 May 2014 12:45:13 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 939C720E7088A; Thu, 15 May 2014 12:45:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 4DAED20E70885; Thu, 15 May 2014 12:45:02 +0000 (UTC) Message-ID: <426F6341FBF145FA9D98CBA06030CCA6@multiplay.co.uk> From: "Steven Hartland" To: "Michael Jung" , References: <5703300e3b052a3f4d14f1957ded08ce@mail.mikej.com> Subject: Re: Understanding ASHIFT SSD and adding a mirrored vdev Date: Thu, 15 May 2014 13:44:58 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 12:45:14 -0000 ----- Original Message ----- From: "Michael Jung" To: Sent: Thursday, May 15, 2014 1:14 PM Subject: Understanding ASHIFT SSD and adding a mirrored vdev > Hi: > > I have read so many threads about ASHIFT on SSD my head is spinning. > This is my current partition created with a 10-stable installer. > > 2x3TB drives in mirror and ashift=12 looks correct. I added the SSD > which defaulted to ashift=9 and this is where my question is. > > Is alignment on SSD important as from what I read we never really know > the flash layout? I hope there is a simple answer to this ;-) SSDs are generally 4K devices, yes, but they also almost always lie about their 4K-ness, instead reporting 512-byte sectors for maximum compatibility. To work around devices which lie, FreeBSD has device "quirks"; we've included 4K quirks for quite a large number of SSDs, but the list is ever growing, so it's not guaranteed that your SSD will have a quirk to correct its lies, if in fact it is lying. > Secondly, I want to add another vdev, 2x mirrored 3TB drives, and it seems > that ashift would be vdev specific so I would need to gnop the new > drives before adding the new vdev? I've just MFC'ed r264850 to 10-stable; it seems I forgot to tag it with MFC, so I didn't get a reminder. If you update to the latest source you'll have a new sysctl: vfs.zfs.min_auto_ashift This can be used to force the ashift to be artificially increased, which is useful for forward compatibility.
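To make the two routes concrete, here is a minimal sketch of both the older gnop(8) workaround and the new sysctl, assuming the second mirror is built from ada3 and ada4 as in the original question (device names are illustrative, and note the pool-level command is zpool add, not zfs add):

  # Legacy route: create 4096-byte-sector gnop providers so the new vdev
  # is created with ashift=12; the ashift is recorded at vdev creation
  # time and survives after the .nop providers disappear on reboot.
  gnop create -S 4096 /dev/ada3
  gnop create -S 4096 /dev/ada4
  zpool add zroot mirror ada3.nop ada4.nop

  # Newer route (after r264850): force a minimum ashift directly.
  sysctl vfs.zfs.min_auto_ashift=12
  zpool add zroot mirror ada3 ada4

  # Either way, verify the result per top-level vdev afterwards.
  zdb -C zroot | grep ashift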
> Instead use: sysctl vfs.zfs.min_auto_ashift=12 > then > > zfs add zroot mirror ada3 ada4 Also, what is your SSD? Can you provide the details from: camcontrol identify ada2 This will allow us to create a new quirk if needed. Regards Steve
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 13:00:54 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6BD85131 for ; Thu, 15 May 2014 13:00:54 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 3127A2580 for ; Thu, 15 May 2014 13:00:53 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 3080220E7088A; Thu, 15 May 2014 13:00:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 10ABF20E70885; Thu, 15 May 2014 13:00:49 +0000 (UTC) Message-ID: <14010473114D42CC92756838300EEE64@multiplay.co.uk> From: "Steven Hartland" To: "Radim Kolar" , References: <201405151240.s4FCe1Hw087808@freefall.freebsd.org> Subject: Re: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 14:00:44 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 13:00:54 -0000 ----- Original Message ----- From: "Radim Kolar" snip
> #11 0xc0cb801e in vdev_queue_io_to_issue (vq=0xc46a4b00)
> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:471
> #12 0xc0cb7fb8 in vdev_queue_io (zio=0xc4855000)
> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:744
> #13 0xc0cd84ee in zio_vdev_io_start (ziop=<value optimized out>)
> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2607
> #14 0xc0cd4c18 in zio_execute (zio=0xc4855000)
> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1350
> #15 0xc0cb74e4 in vdev_mirror_io_start (zio=0xc4823894)
> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:284
It's not totally clear, as the faulting line shouldn't ever do so, but I suspect you may be seeing corruption earlier on due to some edge cases in the new queueing code. This could be something that has already been fixed in head but wasn't expected to get triggered without TRIM queueing, which isn't in 10-stable yet. The fix is also likely to change based on feedback from the openzfs guys, hence isn't in 10-stable yet.
Can you try applying the following patches from head to your 10-stable and see if that helps: http://svnweb.freebsd.org/base?view=revision&revision=265046 http://svnweb.freebsd.org/base?view=revision&revision=265321 Regards Steve
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 13:35:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D3779EDB for ; Thu, 15 May 2014 13:35:00 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 166D528D4 for ; Thu, 15 May 2014 13:34:59 +0000 (UTC) Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s4FDYpTX076361 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL); Thu, 15 May 2014 09:34:51 -0400 (EDT) (envelope-from mikej@mikej.com) Received: from mail.mikej.com (firewall [192.168.6.63]) by firewall.mikej.com (8.14.8/8.14.7) with ESMTP id s4FDYUOQ023409; Thu, 15 May 2014 09:34:50 -0400 (EDT) (envelope-from mikej@mikej.com) X-Authentication-Warning: firewall.mikej.com: Host firewall [192.168.6.63] claimed to be mail.mikej.com MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Thu, 15 May 2014 09:34:30 -0400 From: Michael Jung To: Steven Hartland Subject: Re: Understanding ASHIFT SSD and adding a mirrored vdev In-Reply-To: <426F6341FBF145FA9D98CBA06030CCA6@multiplay.co.uk> References: <5703300e3b052a3f4d14f1957ded08ce@mail.mikej.com> <426F6341FBF145FA9D98CBA06030CCA6@multiplay.co.uk> Message-ID: <914cba301cda86cd3f4b117c4350832b@mail.mikej.com> X-Sender: mikej@mikej.com User-Agent: Roundcube Webmail/1.0.0 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 13:35:00 -0000 On , Steven Hartland wrote: > ----- Original Message ----- From: "Michael Jung" > To: > Sent: Thursday, May 15, 2014 1:14 PM > Subject: Understanding ASHIFT SSD and adding a mirrored vdev > > >> Hi: >> >> I have read so many threads about ASHIFT on SSD my head is spinning. >> This is my current partition created with a 10-stable installer. 2x3TB >> drives in mirror and ashift=12 looks correct. I added the SSD >> which defaulted to ashift=9 and this is where my question is. >> >> Is alignment on SSD important as from what I read we never really know >> the flash layout? I hope there is a simple answer to this ;-) > > SSDs are generally 4K devices, yes, but they also almost always lie > about their 4K-ness, instead reporting 512-byte sectors for maximum > compatibility. > > To work around devices which lie, FreeBSD has device "quirks"; we've > included 4K quirks for quite a large number of SSDs, but the list is > ever growing, so it's not guaranteed that your SSD will have a quirk > to correct its lies, if in fact it is lying.
> >> Secondly, I want to add another vdev, 2x mirrored 3TB drives, and it >> seems that ashift would be vdev specific so I would need to gnop the >> new >> drives before adding the new vdev? > > I've just MFC'ed r264850 to 10-stable; it seems I forgot to tag it with MFC > so > I didn't get a reminder. If you update to the latest source you'll > have > a new sysctl: vfs.zfs.min_auto_ashift > > This can be used to force the ashift to be artificially increased, > which > is useful for forward compatibility. > >> > > Instead use: sysctl vfs.zfs.min_auto_ashift=12 > >> then >> >> zfs add zroot mirror ada3 ada4 > > Also, what is your SSD? Can you provide the details from: > camcontrol identify ada2 > > This will allow us to create a new quirk if needed. > > Regards > Steve Steve, thanks for your quick reply, and I'm updating now. Here are two different SSDs; they both claim to be 512 bytes logically and physically. root@charon:/usr/home/mikej # camcontrol identify ada1 pass2: ATA-9 SATA 3.x device pass2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes) protocol ATA/ATAPI-9 SATA 3.x device model Samsung SSD 840 PRO Series firmware revision DXM05B0Q serial number S1ANNSAD905507F WWN 50025385a00fff63 cylinders 16383 heads 16 sectors/track 63 sector size logical 512, physical 512, offset 0 LBA supported 250069680 sectors LBA48 supported 250069680 sectors PIO supported PIO4 DMA supported WDMA2 UDMA6 media RPM non-rotating Feature Support Enabled Value Vendor read ahead yes yes write cache yes yes flush cache yes yes overlap no Tagged Command Queuing (TCQ) no no Native Command Queuing (NCQ) yes 32 tags SMART yes yes microcode download yes yes security yes no power management yes yes advanced power management no no automatic acoustic management no no media status notification no no power-up in Standby no no write-read-verify yes no 0/0x0 unload no no free-fall no no Data Set Management (DSM/TRIM) yes DSM - max 512byte blocks yes 8 DSM - deterministic read yes zeroed Host Protected Area (HPA) yes no 250069680/250069680 HPA - Security no root@charon:/usr/home/mikej # root@firewall:/home/mikej # camcontrol identify ada2 pass2: ATA-8 SATA 2.x device pass2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes) protocol ATA/ATAPI-8 SATA 2.x device model CT128V4SSD2 firmware revision S5FAMM25 serial number 539C0738093100005042 cylinders 16383 heads 16 sectors/track 63 sector size logical 512, physical 512, offset 0 LBA supported 250069680 sectors LBA48 supported 250069680 sectors PIO supported PIO4 DMA supported WDMA2 UDMA5 media RPM non-rotating Feature Support Enabled Value Vendor read ahead yes yes write cache yes yes flush cache yes yes overlap no Tagged Command Queuing (TCQ) no no Native Command Queuing (NCQ) yes 32 tags NCQ Queue Management no NCQ Streaming no Receive & Send FPDMA Queued no SMART yes yes microcode download yes yes security yes no power management yes yes advanced power management yes no 0/0x00 automatic acoustic management no no media status notification no no power-up in Standby no no write-read-verify no no unload yes yes general purpose logging yes yes free-fall no no Data Set Management (DSM/TRIM) yes DSM - max 512byte blocks yes not specified DSM - deterministic read no Host Protected Area (HPA) yes no 250069680/250069680 HPA - Security no root@firewall:/home/mikej # --mikej
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 14:51:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher
ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4D4E5878; Thu, 15 May 2014 14:51:03 +0000 (UTC) Received: from blu0-omc3-s19.blu0.hotmail.com (blu0-omc3-s19.blu0.hotmail.com [65.55.116.94]) by mx1.freebsd.org (Postfix) with ESMTP id 171B42FF2; Thu, 15 May 2014 14:51:02 +0000 (UTC) Received: from BLU179-W70 ([65.55.116.73]) by blu0-omc3-s19.blu0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675); Thu, 15 May 2014 07:50:55 -0700 X-TMN: [BEocUIBX9xfs4nfRymGYqr/nLTZgnm5X] X-Originating-Email: [hsn@sendmail.cz] Message-ID: From: Radim Kolar To: Steven Hartland , "freebsd-fs@FreeBSD.org" , "bug-followup@freebsd.org" Subject: RE: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 14:50:55 +0000 Importance: Normal In-Reply-To: <14010473114D42CC92756838300EEE64@multiplay.co.uk> References: <201405151240.s4FCe1Hw087808@freefall.freebsd.org>, <14010473114D42CC92756838300EEE64@multiplay.co.uk> MIME-Version: 1.0 X-OriginalArrivalTime: 15 May 2014 14:50:55.0752 (UTC) FILETIME=[1369B080:01CF704D] Content-Type: text/plain; charset="iso-8859-2" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 14:51:03 -0000 > This could be something that has already been fixed in head but wasn't expected > to get triggered without TRIM queueing, which isn't in 10-stable yet. The fix > is also likely to change based on feedback from the openzfs guys, hence isn't > in 10-stable yet. The only workaround to get 10-STABLE to boot without panic is to boot 10.0 and then import/export the pool. 265046 is already merged in 10-stable. Now building stable with 265152 and 265321 merged.
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 15:00:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1B42DBE7 for ; Thu, 15 May 2014 15:00:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 096E620A0 for ; Thu, 15 May 2014 15:00:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4FF01BF039251 for ; Thu, 15 May 2014 15:00:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4FF01Ec039250; Thu, 15 May 2014 15:00:01 GMT (envelope-from gnats) Date: Thu, 15 May 2014 15:00:01 GMT Message-Id: <201405151500.s4FF01Ec039250@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: zfs panic on 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 15:00:02 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS.
From: Radim Kolar To: Steven Hartland , "freebsd-fs@FreeBSD.org" , "bug-followup@freebsd.org" Cc: Subject: RE: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 14:50:55 +0000 > This could be something that has already been fixed in head but wasn't expected > to get triggered without TRIM queueing, which isn't in 10-stable yet. The fix > is also likely to change based on feedback from the openzfs guys, hence isn't > in 10-stable yet. The only workaround to get 10-STABLE to boot without panic is to boot 10.0 and then import/export the pool. 265046 is already merged in 10-stable. Now building stable with 265152 and 265321 merged.
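For readers hitting the same boot panic, the workaround described above amounts to roughly the following from a FreeBSD 10.0 live/rescue shell (a hedged sketch: the pool name zroot, the -f force flag, and the /mnt altroot are illustrative assumptions, not details taken from the PR):

  zpool import -f -o altroot=/mnt zroot   # import the pool under 10.0
  zpool export zroot                      # cleanly export it again
  reboot                                  # then boot back into 10-STABLE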
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 15:05:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 48EEAED5; Thu, 15 May 2014 15:05:51 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 0C3F9215F; Thu, 15 May 2014 15:05:50 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 9FBFF20E7088B; Thu, 15 May 2014 15:05:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 5391D20E70885; Thu, 15 May 2014 15:05:43 +0000 (UTC) Message-ID: <6431D5A668064EDAB39D938A740F76E8@multiplay.co.uk> From: "Steven Hartland" To: "Radim Kolar" , "freebsd-fs@FreeBSD.org" , References: <201405151240.s4FCe1Hw087808@freefall.freebsd.org>, <14010473114D42CC92756838300EEE64@multiplay.co.uk> Subject: Re: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 16:05:39 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-2"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 15:05:51 -0000 ----- Original Message ----- From: "Radim Kolar" >> This could be something that has already been fixed in head but wasn't expected >> to get triggered without TRIM queueing, which isn't in 10-stable yet. The fix >> is also likely to change based on feedback from the openzfs guys, hence isn't >> in 10-stable yet. > > The only workaround to get 10-STABLE to boot without panic is to boot 10.0 and then import/export the pool > > > 265046 is already merged in 10-stable. Ah yes, I did merge that one just in case. > Now building stable with 265152 and 265321 merged. You shouldn't need 265152, as that's just for queued TRIMs and may just confuse things further. I'm not sure 265321 on its own will make a difference here, as that's a fix for stack overflow, and as your stack in the trace is only 14 deep that shouldn't be your case. Once you're done building, and if it does still panic, can you confirm the code line at the panic location with kgdb, as well as the details of the zio being processed under pretty print.
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 15:10:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9C17D12A for ; Thu, 15 May 2014 15:10:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 89F4121B2 for ; Thu, 15 May 2014 15:10:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4FFA2V9043140 for ; Thu, 15 May 2014 15:10:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4FFA1jd043139; Thu, 15 May 2014 15:10:01 GMT (envelope-from gnats) Date: Thu, 15 May 2014 15:10:01 GMT Message-Id: <201405151510.s4FFA1jd043139@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/189355: zfs panic on 10-stable Reply-To: "Steven Hartland" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 15:10:02 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. From: "Steven Hartland" To: "Radim Kolar" , "freebsd-fs@FreeBSD.org" , Cc: Subject: Re: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 16:05:39 +0100 ----- Original Message ----- From: "Radim Kolar" >> This could be something that has already been fixed in head but wasn't expected >> to get triggered without TRIM queueing, which isn't in 10-stable yet. The fix >> is also likely to change based on feedback from the openzfs guys, hence isn't >> in 10-stable yet. > > The only workaround to get 10-STABLE to boot without panic is to boot 10.0 and then import/export the pool > > > 265046 is already merged in 10-stable. Ah yes, I did merge that one just in case. > Now building stable with 265152 and 265321 merged. You shouldn't need 265152, as that's just for queued TRIMs and may just confuse things further. I'm not sure 265321 on its own will make a difference here, as that's a fix for stack overflow, and as your stack in the trace is only 14 deep that shouldn't be your case. Once you're done building, and if it does still panic, can you confirm the code line at the panic location with kgdb, as well as the details of the zio being processed under pretty print.
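In practice, that kgdb request translates to something like the following session (a sketch; the vmcore path and core number are assumptions, and the frame numbers follow the backtrace quoted earlier in this thread):

  kgdb /boot/kernel/kernel /var/crash/vmcore.0
  (kgdb) frame 11             # vdev_queue_io_to_issue(), the faulting frame
  (kgdb) list                 # confirm the exact source line at the panic
  (kgdb) frame 12             # vdev_queue_io(), which has the zio in scope
  (kgdb) set print pretty on  # structured output for struct dumps
  (kgdb) print *zio           # details of the zio being processed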
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 15:30:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4E8C4FB7 for ; Thu, 15 May 2014 15:30:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3C7472382 for ; Thu, 15 May 2014 15:30:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4FFU0PK050581 for ; Thu, 15 May 2014 15:30:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4FFU0d6050580; Thu, 15 May 2014 15:30:00 GMT (envelope-from gnats) Date: Thu, 15 May 2014 15:30:00 GMT Message-Id: <201405151530.s4FFU0d6050580@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Reply-To: Karl Denninger X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 15:30:01 -0000 The following reply was made to PR kern/187594; it has been noted by GNATS. From: Karl Denninger To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Date: Thu, 15 May 2014 10:05:34 -0500 I have now been running the latest delta as posted 26 March -- it is coming up on two months now, has been stable here and I've seen several positive reports and no negative ones on impact for others. Performance continues to be "as expected." Is there an expectation on this being merged forward and/or MFC'd?
-- Karl karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 18:20:06 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1209C185; Thu, 15 May 2014 18:20:06 +0000 (UTC) Received: from BLU004-OMC3S6.hotmail.com (blu004-omc3s6.hotmail.com [65.55.116.81]) by mx1.freebsd.org (Postfix) with ESMTP id CCFC9238D; Thu, 15 May 2014 18:20:05 +0000 (UTC) Received: from BLU179-W50 ([65.55.116.72]) by BLU004-OMC3S6.hotmail.com with Microsoft SMTPSVC(7.5.7601.22678); Thu, 15 May 2014 11:18:59 -0700 X-TMN: [fBfe31S6qAW72jkUBRL9oH3LhpvFl+22] X-Originating-Email: [hsn@sendmail.cz] Message-ID: From: Radim Kolar To: Steven Hartland , "freebsd-fs@FreeBSD.org" , "bug-followup@freebsd.org" Subject: RE: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 18:18:59 +0000 Importance: Normal In-Reply-To: References: <201405151240.s4FCe1Hw087808@freefall.freebsd.org>, <14010473114D42CC92756838300EEE64@multiplay.co.uk>, MIME-Version: 1.0 X-OriginalArrivalTime: 15 May 2014 18:18:59.0730 (UTC) FILETIME=[24718F20:01CF706A] Content-Type: text/plain; charset="iso-8859-2" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 18:20:06 -0000 > 265046 is already merged in 10-stable. > Now building stable with 265152 and 265321 merged. These 2 patches did not remove the panic on boot. I am compiling the kernel with makeoptions DEBUG=-g WITH_CTF=1 Are there any options which I can use for creating a more debuggable dump, such as adding -O0 somewhere?
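For context, the build and dump knobs in play here look roughly as follows (a sketch; whether anything beyond the defaults is needed is exactly what is being asked, and lowering kernel optimization below -O has historically been unreliable, so treat the COPTFLAGS line as an assumption to test rather than a recommendation):

  # Kernel config: symbols plus CTF data, as in the message above.
  makeoptions     DEBUG=-g
  makeoptions     WITH_CTF=1
  options         KDB     # kernel debugger framework
  options         DDB     # in-kernel debugger

  # /etc/rc.conf: make sure a dump device is configured at all.
  dumpdev="AUTO"

  # /etc/make.conf (assumption): reduced optimization so fewer variables
  # show up as <value optimized out> in kgdb.
  COPTFLAGS= -O -g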
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 18:30:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C1403354 for ; Thu, 15 May 2014 18:30:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AF4F8249E for ; Thu, 15 May 2014 18:30:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4FIU1BM031331 for ; Thu, 15 May 2014 18:30:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4FIU1QI031330; Thu, 15 May 2014 18:30:01 GMT (envelope-from gnats) Date: Thu, 15 May 2014 18:30:01 GMT Message-Id: <201405151830.s4FIU1QI031330@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: zfs panic on 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 May 2014 18:30:01 -0000 The following reply was made to PR kern/189355; it has been noted by GNATS. From: Radim Kolar To: Steven Hartland , "freebsd-fs@FreeBSD.org" , "bug-followup@freebsd.org" Cc: Subject: RE: kern/189355: zfs panic on 10-stable Date: Thu, 15 May 2014 18:18:59 +0000 > 265046 is already merged in 10-stable. > Now building stable with 265152 and 265321 merged. These 2 patches did not remove the panic on boot. I am compiling the kernel with makeoptions DEBUG=-g WITH_CTF=1 Are there any options which I can use for creating a more debuggable dump, such as adding -O0 somewhere?
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 18:33:29 2014
From: "Steven Hartland"
To: "Radim Kolar", freebsd-fs@FreeBSD.org
Subject: Re: kern/189355: zfs panic on 10-stable
Date: Thu, 15 May 2014 19:33:18 +0100
Message-ID: <84946126FE6B4B03BD1D2B6CDEE053C3@multiplay.co.uk>

----- Original Message ----- From: "Radim Kolar"

> > 265046 is already merged in 10-stable.
> > Now building stable with 265152 and 265321 merged.
>
> These two patches did not remove the panic on boot. I am compiling
> the kernel with
>
> makeoptions DEBUG=-g WITH_CTF=1
>
> Are there any options I can use for creating a more debuggable dump,
> such as adding -O0 somewhere?

Standard options are good enough for a kgdb session; nothing else should be needed.
Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Thu May 15 18:44:44 2014
From: Radim Kolar
To: Steven Hartland, "freebsd-fs@FreeBSD.org", "bug-followup@freebsd.org"
Subject: RE: kern/189355: zfs panic on 10-stable
Date: Thu, 15 May 2014 18:44:37 +0000

Crash stack from the patched kernel:

#7  0xc092336c in calltrap () at /usr/src/sys/i386/i386/exception.s:170
#8  0xc069345d in kdb_enter (why=0xc09a7ef8 "panic", msg=<value optimized out>) at cpufunc.h:71
#9  0xc065710f in panic (fmt=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:823
#10 0xc0939acb in dblfault_handler () at /usr/src/sys/i386/i386/trap.c:1072
#11 0xc0cb8187 in vdev_queue_io_to_issue (vq=0xc46ab300)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:489
#12 0xc0cb8119 in vdev_queue_io (zio=0xc486c5b8)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:768
#13 0xc0cd8704 in zio_vdev_io_start (ziop=<value optimized out>)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2617
#14 0xc0cd4e28 in zio_execute (zio=0xc486c5b8)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1357
#15 0xc0cb7604 in vdev_mirror_io_start (zio=0xc476f000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:284
#16 0xc0cd854f in zio_vdev_io_start (ziop=<value optimized out>)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2537
#17 0xc0cd4e28 in zio_execute (zio=0xc476f000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1357
#18 0xc0ca0ddc in spa_load_verify_cb (zilog=0x0, bp=<value optimized out>, dnp=0xc4b0fa00, arg=<value optimized out>)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:1887
#19 0xc0c5eb79 in traverse_visitbp (td=0xd93f1120, dnp=0xc46ab300, bp=0xc4b18100, zb=0xd93f06b0)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:272
#20 0xc0c5edb0 in traverse_visitbp (td=0xd93f1120, dnp=0xc4b0fa00, bp=0x80, zb=0xd93f07a8)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:306

Failing line 489
is:

        if (avl_numnodes(&vq->vq_active_tree) >= zfs_vdev_max_active)
                return (ZIO_PRIORITY_NUM_QUEUEABLE);
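(Since optimized code can mis-report line numbers, a quick cross-check of
what the debugger is actually decoding can help; these are standard
gdb/kgdb commands, using the frame and address from the backtrace above:

    (kgdb) frame 11
    (kgdb) info line *0xc0cb8187        # map the saved PC back to file:line
    (kgdb) print zfs_vdev_max_active    # the tunable the test compares against
)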
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 18:59:37 2014
From: "Steven Hartland"
To: "Radim Kolar", "freebsd-fs@FreeBSD.org"
Subject: Re: kern/189355: zfs panic on 10-stable
Date: Thu, 15 May 2014 19:59:28 +0100

----- Original Message ----- From: "Radim Kolar"

> Failing line 489 is:
>
>         if (avl_numnodes(&vq->vq_active_tree) >= zfs_vdev_max_active)
>                 return (ZIO_PRIORITY_NUM_QUEUEABLE);

OK, so that's what I thought it was. Could you see what vq is?
From owner-freebsd-fs@FreeBSD.ORG Thu May 15 20:38:44 2014
From: Radim Kolar
To: Steven Hartland, "freebsd-fs@FreeBSD.org", "bug-followup@freebsd.org"
Subject: RE: kern/189355: zfs panic on 10-stable
Date: Thu, 15 May 2014 20:37:38 +0000

> OK, so that's what I thought it was. Could you see what vq is?

(kgdb) up 11
#11 0xc0cb8187 in vdev_queue_io_to_issue (vq=0xc46ab300)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:489
489             if (avl_numnodes(&vq->vq_active_tree) >= zfs_vdev_max_active)
Current language: auto; currently minimal
(kgdb) print vq
$1 = (vdev_queue_t *) 0xc46ab300
(kgdb) print *vq
$2 = {vq_vdev = 0xc46ab000, vq_class = {{vqc_active = 0, vqc_queued_tree = {
      avl_root = 0x0, avl_compar = 0xc0cb7de0 <vdev_queue_timestamp_compare>,
      avl_offset = 476, avl_numnodes = 0, avl_size = 732}}, {
    vqc_active = 0, vqc_queued_tree = {avl_root = 0x0,
      avl_compar = 0xc0cb7de0 <vdev_queue_timestamp_compare>,
      avl_offset = 476, avl_numnodes = 0, avl_size = 732}}, {
    vqc_active = 1, vqc_queued_tree = {avl_root = 0x0,
      avl_compar = 0xc0cb7d70 <vdev_queue_offset_compare>,
      avl_offset = 476, avl_numnodes = 0, avl_size = 732}}, {
    vqc_active = 0, vqc_queued_tree = {avl_root = 0x0,
      avl_compar = 0xc0cb7d70 <vdev_queue_offset_compare>,
      avl_offset = 476, avl_numnodes = 0, avl_size = 732}}, {
    vqc_active = 0, vqc_queued_tree = {avl_root = 0xc486c794,
      avl_compar = 0xc0cb7d70 <vdev_queue_offset_compare>,
      avl_offset = 476, avl_numnodes = 1, avl_size = 732}}, {
    vqc_active = 0, vqc_queued_tree = {avl_root = 0x0,
      avl_compar = 0xc0cb7d70 <vdev_queue_offset_compare>,
      avl_offset = 476, avl_numnodes = 0, avl_size = 732}}},
  vq_active_tree = {avl_root = 0xc476fa70,
    avl_compar = 0xc0cb7d70 <vdev_queue_offset_compare>, avl_offset = 476,
    avl_numnodes = 1, avl_size = 732}, vq_last_offset = 5941966336,
  vq_io_complete_ts = 7702783485, vq_lock = {lock_object = {
      lo_name = 0xc0d6b0ff "vq->vq_lock", lo_flags = 40960000, lo_data = 0,
      lo_witness = 0x0}, sx_lock = 3290710800}}
(kgdb)
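(A detail worth noting from this dump: the avl_root pointers point at AVL
nodes embedded inside each zio at avl_offset bytes, so the single active
zio can be recovered by hand. A sketch, assuming the zio_t layout in this
build matches the avl_offset of 476 shown above:

    (kgdb) print/x 0xc476fa70 - 476            # avl_root minus avl_offset
    (kgdb) print *(zio_t *)(0xc476fa70 - 476)  # the one outstanding active zio
)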
From owner-freebsd-fs@FreeBSD.ORG Fri May 16 02:55:26 2014
From: araujo@FreeBSD.org
To: araujo@FreeBSD.org, freebsd-fs@FreeBSD.org, rmacklem@FreeBSD.org
Subject: Re: kern/167048: [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NFS
Date: Fri, 16 May 2014 02:55:25 GMT
Message-Id: <201405160255.s4G2tPmI032195@freefall.freebsd.org>

Synopsis: [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NFS

Responsible-Changed-From-To: freebsd-fs->rmacklem
Responsible-Changed-By: araujo
Responsible-Changed-When: Fri May 16 02:55:25 UTC 2014
Responsible-Changed-Why: That is rmacklem@ area.
http://www.freebsd.org/cgi/query-pr.cgi?pr=167048

From owner-freebsd-fs@FreeBSD.ORG Fri May 16 05:44:30 2014
From: "Nagy, Attila"
To: Karl Denninger, freebsd-fs@FreeBSD.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 16 May 2014 07:35:36 +0200
Message-ID: <5375A3A8.3010406@fsn.hu>

On 05/15/14 17:30, Karl Denninger wrote:
>
> I have now been running the latest delta as posted 26 March -- it is
> coming up on two months now, has been stable here and I've seen several
> positive reports and no negative ones on impact for others. Performance
> continues to be "as expected."
>
> Is there an expectation on this being merged forward and/or MFC'd?
>
Well, the expectation is quite high - at least from my side. :-)

We have been struggling with stable/10 boxes and ZFS since they were introduced in our environment, while stable/9 runs nicely under the same workload.
OS 10 swaps a lot to allow the ARC to grow, and without swap space it starts killing random processes after 20-30 days, depending on how much RAM the machine has and how big I set arc_max (without that tunable, the situation is even worse).

I wonder: does nobody use stable/10 with ZFS?

From owner-freebsd-fs@FreeBSD.ORG Fri May 16 07:32:42 2014
From: Andriy Gapon <avg@FreeBSD.org>
To: "Nagy, Attila", freebsd-fs@FreeBSD.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 16 May 2014 10:31:40 +0300
Message-ID: <5375BEDC.9090202@FreeBSD.org>

on 16/05/2014 08:35 Nagy, Attila said the following:
> On 05/15/14 17:30, Karl Denninger wrote:
>> I have now been running the latest delta as posted 26 March -- it is
>> coming up on two months now, has been stable here and I've seen several
>> positive reports and no negative ones on impact for others. Performance
>> continues to be "as expected."
>> Is there an expectation on this being merged forward and/or MFC'd?
>>
> Well, the expectation is quite high - at least from my side. :-)
> We have been struggling with stable/10 boxes and ZFS since they were
> introduced in our environment, while stable/9 runs nicely under the same
> workload.
> OS 10 swaps a lot to allow the ARC to grow, and without swap space it
> starts killing random processes after 20-30 days, depending on how much
> RAM the machine has and how big I set arc_max (without that tunable, the
> situation is even worse).
>
> I wonder: does nobody use stable/10 with ZFS?

Please try to upgrade to r265945 or later.
--
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Fri May 16 07:40:02 2014
From: Radim Kolar
To: freebsd-fs@FreeBSD.org
Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable
Date: Fri, 16 May 2014 07:40:02 GMT
Message-Id: <201405160740.s4G7e2Kq056740@freefall.freebsd.org>

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: Radim Kolar
To: Steven Hartland, "bug-followup@FreeBSD.org"
Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable
Date: Fri, 16 May 2014 07:31:54 +0000

The line number will probably be wrong; I have already seen such cases with clang in 10.0 when optimization is used. I am rebuilding the kernel without optimization now.
From owner-freebsd-fs@FreeBSD.ORG Fri May 16 15:20:01 2014
From: "Steven Hartland"
To: freebsd-fs@FreeBSD.org
Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable
Date: Fri, 16 May 2014 15:20:01 GMT
Message-Id: <201405161520.s4GFK1Q1043694@freefall.freebsd.org>

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: "Steven Hartland"
To: "Radim Kolar"
Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable
Date: Fri, 16 May 2014 16:12:53 +0100

Seems this didn't get through the first time, so resending.

Hmm, now that is odd, as vq->vq_active_tree looks quite sane with avl_numnodes = 1, so there's one outstanding zio in the active tree.

What might be of interest here is that the stack indicates your zio is a scrubbing read, so there is something not quite right on the pool.

Is there somewhere you can tar up:
1. The kernel dump
2. Your current sources
3. Your current kernel

Then email me a link privately at smh at freebsd.org, so I can have a look in more detail?
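(For anyone wanting to package the same artifacts, a rough sketch; the
paths are assumptions based on a default dumpdir and build layout, so
adjust to taste:

    tar czf kern189355.tar.gz \
        /var/crash/info.0 /var/crash/vmcore.0 \
        /usr/obj/usr/src/sys/GENERIC/kernel.debug \
        /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs
)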
From owner-freebsd-fs@FreeBSD.ORG Sat May 17 12:01:00 2014
From: Attila Nagy
To: Andriy Gapon, freebsd-fs@FreeBSD.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Sat, 17 May 2014 14:00:41 +0200
Message-ID: <53774F69.3090402@fsn.hu>

On 05/16/2014 09:31 AM, Andriy Gapon wrote:
> on 16/05/2014 08:35 Nagy, Attila said the following:
>> On 05/15/14 17:30, Karl Denninger wrote:
>>> I have now been running the latest delta as posted 26 March -- it is
>>> coming up on two months now, has been stable here and I've seen several
>>> positive reports and no negative ones on impact for others. Performance
>>> continues to be "as expected."
>>> Is there an expectation on this being merged forward and/or MFC'd?
>> Well, the expectation is quite high - at least from my side. :-)
>> We have been struggling with stable/10 boxes and ZFS since they were
>> introduced in our environment, while stable/9 runs nicely under the
>> same workload.
>> OS 10 swaps a lot to allow the ARC to grow, and without swap space it
>> starts killing random processes after 20-30 days, depending on how much
>> RAM the machine has and how big I set arc_max (without that tunable,
>> the situation is even worse).
>> I wonder: does nobody use stable/10 with ZFS?
> Please try to upgrade to r265945 or later.

Thanks for the pointer, I will try it!

From owner-freebsd-fs@FreeBSD.ORG Sat May 17 17:53:12 2014
From: Matthias Gamsjager
Cc: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Date: Sat, 17 May 2014 19:52:40 +0200

> Please try to upgrade to r265945 or later.

So far so good. No swapping, good ARC size after 1 day of uptime.
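(The interim workaround mentioned throughout this thread is to cap the ARC
from loader.conf; the value below is only an example and should be sized
per machine:

    # /boot/loader.conf
    vfs.zfs.arc_max="4G"    # leave headroom for userland processes
)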
From owner-freebsd-fs@FreeBSD.ORG Sat May 17 17:53:24 2014
From: Benjamin Kaduk
To: freebsd-fs@freebsd.org
Subject: Add an assert that v_holdcnt >= v_usecount?
Date: Sat, 17 May 2014 13:48:11 -0400 (EDT)

jhb was helping me debug a crashy openafs build in one of my VMs, and the symptoms seemed to indicate that a vnode had been partially destroyed before vgone() was called from vflush(), as if some buggy filesystem code had called vdrop() instead of vrele() or something similar.
In a quick check, it didn't look like we had any assertions that would catch such bugs, so I tried adding something like this:

Index: sys/kern/vfs_subr.c
===================================================================
--- sys/kern/vfs_subr.c (revision 266330)
+++ sys/kern/vfs_subr.c (working copy)
@@ -2343,6 +2343,8 @@
        if (vp->v_holdcnt <= 0)
                panic("vdrop: holdcnt %d", vp->v_holdcnt);
        vp->v_holdcnt--;
+       VNASSERT(vp->v_holdcnt >= vp->v_usecount, vp,
+           ("hold count less than use count"));
        if (vp->v_holdcnt > 0) {
                VI_UNLOCK(vp);
                return;

Does that seem like something that would be generally useful?

-Ben

From owner-freebsd-fs@FreeBSD.ORG Sat May 17 19:22:39 2014
From: Konstantin Belousov <kostikbel@gmail.com>
To: Benjamin Kaduk
Cc: freebsd-fs@freebsd.org
Subject: Re: Add an assert that v_holdcnt >= v_usecount?
Date: Sat, 17 May 2014 22:22:29 +0300
Message-ID: <20140517192229.GA74331@kib.kiev.ua>

On Sat, May 17, 2014 at 01:48:11PM -0400, Benjamin Kaduk wrote:
> jhb was helping me debug a crashy openafs build in one of my VMs, and the
> symptoms seemed to indicate that a vnode had been partially destroyed
> before vgone() was called from vflush(), as if some buggy filesystem code
> had called vdrop() instead of vrele() or something similar.
> In a quick check, it didn't look like we had any assertions that would
> catch such bugs, so I tried adding something like this:
>
> Index: sys/kern/vfs_subr.c
> ===================================================================
> --- sys/kern/vfs_subr.c (revision 266330)
> +++ sys/kern/vfs_subr.c (working copy)
> @@ -2343,6 +2343,8 @@
>         if (vp->v_holdcnt <= 0)
>                 panic("vdrop: holdcnt %d", vp->v_holdcnt);
>         vp->v_holdcnt--;
> +       VNASSERT(vp->v_holdcnt >= vp->v_usecount, vp,
> +           ("hold count less than use count"));
>         if (vp->v_holdcnt > 0) {
>                 VI_UNLOCK(vp);
>                 return;
>
> Does that seem like something that would be generally useful?

This is reasonable.

As a note, I have never seen such corruption of an otherwise valid vnode state. There have been a lot of leaks, but never mismatched vget/vdrop.
From owner-freebsd-fs@FreeBSD.ORG Sat May 17 20:58:04 2014
From: Edward Tomasz Napierała
To: Konstantin Belousov
Cc: freebsd-fs@freebsd.org, Benjamin Kaduk
Subject: Re: Add an assert that v_holdcnt >= v_usecount?
Date: Sat, 17 May 2014 22:57:59 +0200
Message-Id: <071EBBB0-CDC5-47B6-A98C-5D4AD6A21855@FreeBSD.org>

On 17 May 2014, at 21:22, Konstantin Belousov wrote:
> On Sat, May 17, 2014 at 01:48:11PM -0400, Benjamin Kaduk wrote:
>> jhb was helping me debug a crashy openafs build in one of my VMs, and the
>> symptoms seemed to indicate that a vnode had been partially destroyed
>> before vgone() was called from vflush(), as if some buggy filesystem code
>> had called vdrop() instead of vrele() or something similar. In a quick
>> check, it didn't look like we had any assertions that would catch such
>> bugs, so I tried adding something like this:
>>
>> Index: sys/kern/vfs_subr.c
>> ===================================================================
>> --- sys/kern/vfs_subr.c (revision 266330)
>> +++ sys/kern/vfs_subr.c (working copy)
>> @@ -2343,6 +2343,8 @@
>>         if (vp->v_holdcnt <= 0)
>>                 panic("vdrop: holdcnt %d", vp->v_holdcnt);
>>         vp->v_holdcnt--;
>> +       VNASSERT(vp->v_holdcnt >= vp->v_usecount, vp,
>> +           ("hold count less than use count"));
>>         if (vp->v_holdcnt > 0) {
>>                 VI_UNLOCK(vp);
>>                 return;
>>
>> Does that seem like something that would be generally useful?
>
> This is reasonable.
>
> As a note, I have never seen such corruption of an otherwise valid vnode
> state. There have been a lot of leaks, but never mismatched vget/vdrop.

In a filesystem that's already in the tree, or being developed by someone who knows how VFS works - sure. But assertions like this are a great time saver for someone who is new to VFS.
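(The invariant behind the proposed assert, for readers new to VFS: every
use reference implies a hold reference, since vref()/vrele() adjust both
counters while vhold()/vdrop() touch only v_holdcnt, so v_holdcnt <
v_usecount always indicates a mismatched drop somewhere. A suspect vnode
can be eyeballed from the debugger; field names as in sys/sys/vnode.h:

    (kgdb) print vp->v_usecount
    (kgdb) print vp->v_holdcnt
    db> show vnode <addr>       # the DDB equivalent, given a vnode address
)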
From owner-freebsd-fs@FreeBSD.ORG Sat May 17 21:43:04 2014
Date: Sat, 17 May 2014 17:42:55 -0400
From: Adam McDougall
To: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Message-ID: <5377D7DF.5020409@egr.msu.edu>
References: <201405151530.s4FFU0d6050580@freefall.freebsd.org> <5375A3A8.3010406@fsn.hu> <5375BEDC.9090202@FreeBSD.org>

On 05/17/2014 13:52, Matthias Gamsjager wrote:
>> Please try to upgrade to r265945 or later.
>
> So far so good. No swapping, good ARC size after 1 day of uptime.

Me too. I've been running it for about 2 weeks on my home desktop with 4G
RAM, where in the past it would eventually start swapping a few hundred
megs with X, chromium, thunderbird, and a few terminals open. I previously
tried just vm.lowmem_period=0, which helped but did not solve the issue. I
intend to try a more recent -stable (r265945 or later) without any patches,
but I haven't yet, and it takes time after each change to see what happens.
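For reports like the two above, the figures being compared come from the
ZFS ARC statistics sysctls and swapinfo(8). A minimal sketch for logging
them while soak-testing a new revision; the interval and the log path are
arbitrary choices, not anything from the thread:

#!/bin/sh
# Log ARC size against its ceiling, plus swap usage, once a minute.
# kstat.zfs.misc.arcstats.* are the stock FreeBSD ARC counters.
while :; do
    date
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
    swapinfo -h
    sleep 60
done >> /var/log/arc-watch.log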
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 01:51:48 2014
Date: Sat, 17 May 2014 18:51:54 -0700
From: Jeff Chan
Reply-To: Jeff Chan
To: freebsd-fs@freebsd.org
Subject: ZFS snapshot restore not quite working; missing steps?
Message-ID: <1129127016.20140517185154@supranet.net>

We're trying to do a ZFS snapshot restore of a ZFS non-RAIDZ* FreeBSD
9.2-RELEASE system to a new bare system configured with ZFS RAIDZ, and
it's not quite working right. The restore itself seems to complete, but
we're not able to successfully boot the resulting system. We can't recall
the exact error we got, but we think the ZFS cache file was not found.

Presumably we're missing some steps, or don't have all the magic for ZFS
booting from a FreeBSD root partition installed/configured correctly, etc.
What did we miss?

(We have a private network to serve the snapshot over NFS from a third
server on 192.168.0.2.)

(There may be some line wrap below.)
(Comments in (parens))
(Some outputs in [brackets])

On the old system, call it foo:

foo: [103]% zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zroot                 169G  3.40T   545M  legacy
zroot/home           15.1G  3.40T  10.3G  /home
zroot/home/username  4.04G  3.40T  2.74G  /home/username
zroot/bar             128G  3.40T    97K  /bar
zroot/bar/prod        126G  3.40T  73.8G  /bar/prod
zroot/bar/test       2.18G  3.40T  2.18G  /bar/test
zroot/tmp            1.35G  3.40T  1.35G  /tmp
zroot/usr            9.60G  3.40T  9.52G  /usr
zroot/var            14.0G  3.40T  14.0G  /var

foo: [104]% zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  3.62T   170G  3.46T   4%  1.00x  ONLINE  -

On the new system:

Boot from a FreeBSD 9.2-RELEASE installer USB flash drive into LiveCD, then:

(bring up NFS private LAN)
ifconfig ix3 192.168.0.3 netmask 255.255.255.0 up
[ix2: Could not setup receive structures
[ix2: Could not setup receive structures
->
sysctl kern.ipc.nmbclusters=131072
sysctl kern.ipc.nmbjumbo9=38400

(mount remote NFS backup directory)
mount -t nfs 192.168.0.2:/home/backup /mnt
mkdir /var/mnt

sysctl kern.disks
[kern.disks: da0 mfid2 mfid1 mfid0]

gpart create -s gpt mfid0
gpart create -s gpt mfid1
gpart create -s gpt mfid2
gpart add -s 222 -a 4k -t freebsd-boot -l boot0 mfid0
gpart add -s 222 -a 4k -t freebsd-boot -l boot1 mfid1
gpart add -s 222 -a 4k -t freebsd-boot -l boot2 mfid2
gpart add -s 8g -a 4k -t freebsd-swap -l swap0 mfid0
gpart add -s 8g -a 4k -t freebsd-swap -l swap1 mfid1
gpart add -s 8g -a 4k -t freebsd-swap -l swap2 mfid2
gpart add -a 4k -t freebsd-zfs -l disk0 mfid0
gpart add -a 4k -t freebsd-zfs -l disk1 mfid1
gpart add -a 4k -t freebsd-zfs -l disk2 mfid2

(Clear any old zfs data)
dd if=/dev/zero of=/dev/mfid0p3 count=560 bs=512
dd if=/dev/zero of=/dev/mfid1p3 count=560 bs=512
dd if=/dev/zero of=/dev/mfid2p3 count=560 bs=512

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid2

zpool create -f -m none -o altroot=/var/mnt -o cachefile=/tmp/zpool.cache zroot raidz mfid0 mfid1 mfid2
gunzip -c /mnt/foo-backup-full/zroot@20140516183425.gz | zfs receive -vdFu zroot
zpool set bootfs=zroot zroot
zpool get all zroot
zfs get all zroot

zfs set mountpoint=/var/mnt zroot
(add swap to fstab)
vi /var/mnt/var/mnt/etc/fstab
[/dev/mfid0p2 none swap sw 0 0
[/dev/mfid1p2 none swap sw 0 0
[/dev/mfid2p2 none swap sw 0 0

vi /var/mnt/var/mnt/etc/rc.conf [ change adapter name & IP ]
vi /var/mnt/var/mnt/etc/pf.conf [ change adapter name ]
[vi /var/mnt/var/mnt/boot/loader.conf] (not performed)
cd /
zpool export zroot
zpool import -o altroot=/var/mnt -o cachefile=/tmp/zpool.cache zroot
[zfs set mountpoint=/ zroot] (not performed)
cp /tmp/zpool.cache /var/mnt/boot/zfs
zpool get all zroot
zpool set cachefile=/boot/zfs/zpool.cache zroot
zfs unmount -a
zfs set mountpoint=legacy zroot
zfs set mountpoint=/home zroot/home
zfs set mountpoint=/bar zroot/bar
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var

[remove USB]
reboot

Any hints at what we missed/goofed?

Cheers,

Jeff C.

-- 
Jeff Chan
mailto:jeffc@supranet.net
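A recurring pitfall in procedures like the one above is that the
zpool.cache copied into the new root must describe the final import, and
the loader knobs must exist before the first boot from the pool. A
condensed sketch of the ordering that is usually recommended, reusing the
pool name and paths from this thread; whether this is the exact failure
here is an assumption:

#!/bin/sh
# Re-import the received pool under an altroot, then make sure the cache
# file and the loader settings end up inside the new root before rebooting.
zpool export zroot
zpool import -o altroot=/var/mnt -o cachefile=/tmp/zpool.cache zroot

# The cache file must land where the new kernel will look for it.
cp /tmp/zpool.cache /var/mnt/boot/zfs/zpool.cache

# Without these two lines the loader never mounts the ZFS root.
cat >> /var/mnt/boot/loader.conf <<'EOF'
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"
EOF

zpool set bootfs=zroot zroot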
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 05:12:15 2014
Date: Sun, 18 May 2014 13:12:02 +0800
From: Julian Elischer
To: Ali Okan YÜKSEL, freebsd-hackers@freebsd.org
Cc: fs@freebsd.org
Subject: Re: hard reset impacts on ufs file system
Message-ID: <53784122.8090607@elischer.org>

On 5/16/14, 10:44 PM, Ali Okan YÜKSEL wrote:
> incident:
> ==
> file corruption after hard reset on FreeBSD 8.3
>
> details:
> ==
> Hard reset examined on freebsd 8.3. after reboot libncurses.so.8 was 0
> byte. And I couldn't login to system. It gave me error message about
> /bin/sh - libncurses.so.8 corruption. (libncurses depens /bin/sh I guess)
> I found the problem by using fixed shell.
>
> solution:
> ==
> I copied libncurses.so.8 from another system. When I did it problem solved.

Unfortunately, libraries do seem to be one of the more common victims of
this sort of thing, but I have never worked out how. However, as you said,
recovery is relatively easy by booting single user and specifying
/rescue/sh as your shell.

> question:
> ==
> Is it known situation? Did you ever live a problem like this?

Yes, I have seen similar. I don't know why libraries end up in that state,
though.

> Kind regards,
> Ali Okan Yüksel
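The /rescue route works because everything under /rescue is statically
linked, so a truncated shared library cannot take those tools down. A
rough sketch of that style of recovery; the interface name, server, and
export below are placeholders, not details from the report:

# Boot single user and give /rescue/sh as the shell, then:
/rescue/fsck -y /
/rescue/mount -u -o rw /                  # remount root read/write
/rescue/ifconfig em0 192.168.0.10/24      # bring up a LAN interface
/rescue/mount_nfs 192.168.0.2:/export /mnt
/rescue/cp /mnt/lib/libncurses.so.8 /lib/libncurses.so.8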
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 10:09:11 2014
Date: Sun, 18 May 2014 03:09:07 -0700
From: Jeff Chan
Reply-To: Jeff Chan
To: freebsd-fs@freebsd.org
Subject: Re: ZFS snapshot restore not quite working; missing steps?
Message-ID: <1198503903.20140518030907@supranet.net>
In-Reply-To: <1129127016.20140517185154@supranet.net>
References: <1129127016.20140517185154@supranet.net>

On Saturday, May 17, 2014, 6:51:54 PM, Jeff Chan wrote:

> [full quote of the original message trimmed]

Additional information, gpart list on the new system:

root@:~ # gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7821311
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0a
   Mediasize: 734167040 (700M)
   Sectorsize: 512
   Mode: r1w0e1
   rawtype: 7
   length: 734167040
   offset: 0
   type: freebsd-ufs
   index: 1
   end: 1433919
   start: 0
Consumers:
1. Name: da0
   Mediasize: 4004511744 (3.7G)
   Sectorsize: 512
   Mode: r1w0e1

Geom name: mfid0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5859373022
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid0p1
   Mediasize: 110592 (108k)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: fe8923b4-ddd6-11e3-8d53-a0369f312ea0
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: boot0
   length: 110592
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 255
   start: 40
2. Name: mfid0p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r0w0e0
   rawuuid: 0863e7f4-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: swap0
   length: 8589934592
   offset: 131072
   type: freebsd-swap
   index: 2
   end: 16777471
   start: 256
3. Name: mfid0p3
   Mediasize: 2991408918528 (2.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r0w0e0
   rawuuid: 08a7dd63-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: disk0
   length: 2991408918528
   offset: 8590065664
   type: freebsd-zfs
   index: 3
   end: 5859373015
   start: 16777472
Consumers:
1. Name: mfid0
   Mediasize: 2999999004672 (2.7T)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: mfid1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5859373022
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid1p1
   Mediasize: 110592 (108k)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: 02b8c77a-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: boot1
   length: 110592
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 255
   start: 40
2. Name: mfid1p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r0w0e0
   rawuuid: 086d2831-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: swap1
   length: 8589934592
   offset: 131072
   type: freebsd-swap
   index: 2
   end: 16777471
   start: 256
3. Name: mfid1p3
   Mediasize: 2991408918528 (2.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r0w0e0
   rawuuid: 08b854b6-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: disk1
   length: 2991408918528
   offset: 8590065664
   type: freebsd-zfs
   index: 3
   end: 5859373015
   start: 16777472
Consumers:
1. Name: mfid1
   Mediasize: 2999999004672 (2.7T)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: mfid2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5859373022
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid2p1
   Mediasize: 110592 (108k)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: 03c0b756-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: boot2
   length: 110592
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 255
   start: 40
2. Name: mfid2p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r0w0e0
   rawuuid: 087d8c31-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: swap2
   length: 8589934592
   offset: 131072
   type: freebsd-swap
   index: 2
   end: 16777471
   start: 256
3. Name: mfid2p3
   Mediasize: 2991408918528 (2.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r0w0e0
   rawuuid: 08c61aed-ddd7-11e3-8d53-a0369f312ea0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: disk2
   length: 2991408918528
   offset: 8590065664
   type: freebsd-zfs
   index: 3
   end: 5859373015
   start: 16777472
Consumers:
1. Name: mfid2
   Mediasize: 2999999004672 (2.7T)
   Sectorsize: 512
   Mode: r0w0e0

-- 
Jeff Chan
mailto:jeffc@supranet.net
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 10:25:55 2014
Date: Sun, 18 May 2014 11:25:39 +0100
From: "Steven Hartland"
To: "Jeff Chan"
Subject: Re: ZFS snapshot restore not quite working; missing steps?
References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net>

Not used legacy here; have you tried:

zfs set mountpoint=/ zroot

Silly question given you're using mfi, but have you confirmed your
controller / machine BIOS is set to boot from the relevant disk?

Also, have you confirmed zfs in /boot/loader.conf, e.g.

zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"

Finally, do be aware it's not ideal to use a RAID controller for ZFS;
you're better off with an HBA, which does less "fancy" stuff when
communicating with the disks, allowing ZFS to see what's really going on.

Regards
Steve

----- Original Message -----
From: "Jeff Chan"
Sent: Sunday, May 18, 2014 11:09 AM
Subject: Re: ZFS snapshot restore not quite working; missing steps?

> [full quote of the previous message, including the gpart list, trimmed]
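All three of Steven's checks can be made mechanically from the live
environment before rebooting; a short sketch against the altroot path used
earlier in the thread:

#!/bin/sh
# Confirm the boot-critical pieces from the LiveCD, with the pool imported
# under the /var/mnt altroot as in the procedure above.
zpool get bootfs zroot                 # should print zroot, not '-'
grep -E 'zfs_load|vfs\.root\.mountfrom' /var/mnt/boot/loader.conf
ls -l /var/mnt/boot/zfs/zpool.cache    # the cache file must be inside the new root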
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 10:53:31 2014
Date: Sun, 18 May 2014 03:53:26 -0700
From: Jeff Chan
Reply-To: Jeff Chan
To: freebsd-fs@freebsd.org
Subject: Re: ZFS snapshot restore not quite working; missing steps?
Message-ID: <236659679.20140518035326@supranet.net>
References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net>

On Sunday, May 18, 2014, 3:25:39 AM, Steven Hartland wrote:

> Not used legacy here; have you tried:
> zfs set mountpoint=/ zroot

It was tried initially. On the second attempt,

zfs set mountpoint=legacy zroot

was tried instead.

> Silly question given you're using mfi, but have you confirmed your
> controller / machine BIOS is set to boot from the relevant disk?

Not silly; yes, it does boot from the right disk.

> Also, have you confirmed zfs in /boot/loader.conf, e.g.
> zfs_load="YES"
> vfs.root.mountfrom="zfs:zroot"

Yes.

> Finally, do be aware it's not ideal to use a RAID controller for ZFS;
> you're better off with an HBA, which does less "fancy" stuff when
> communicating with the disks, allowing ZFS to see what's really going on.

Yes, indeed. It's an LSI 2208, which doesn't seem to be reflashable into an
HBA like an LSI 2008, so we're running each of the three disks as a
separate RAID-0, with RAID hardware read and write caching turned off (as
it supposedly works faster with ZFS that way). Would have preferred an HBA,
but we're trying to do the best with the hardware we have.

We wonder if things like the MBR and the ZFS boot loader are happy. How
does one review whether those are set up correctly?

Cheers,

Jeff C.

> Regards
> Steve

> [remainder of quoted message trimmed]

-- 
Jeff Chan
mailto:jeffc@supranet.net
http://www.jeffchan.com/
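There is little to read back from the boot blocks themselves, but
re-writing them is cheap and idempotent, which makes them an easy item to
rule out; the only hard requirement is that the freebsd-boot partition is
large enough to hold gptzfsboot. A sketch using the disk names from this
thread:

#!/bin/sh
# Re-install the protective MBR and the GPT ZFS boot code on every disk.
for d in mfid0 mfid1 mfid2; do
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 "$d"
done
gpart show mfid0          # partition 1 should be freebsd-boot
ls -l /boot/gptzfsboot    # must fit in the 108k boot partitions above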
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 11:25:29 2014
Date: Sun, 18 May 2014 12:25:22 +0100
From: "Steven Hartland"
To: "Jeff Chan"
Subject: Re: ZFS snapshot restore not quite working; missing steps?
Message-ID: <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk>
References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net>

> [earlier quoting trimmed]
>
> Yes, indeed. It's an LSI 2208, which doesn't seem to be reflashable into
> an HBA like an LSI 2008, so we're running each of the three disks as a
> separate RAID-0, with RAID hardware read and write caching turned off (as
> it supposedly works faster with ZFS that way). Would have preferred an
> HBA, but we're trying to do the best with the hardware we have.

Yeah, hear you there; we've had to do the same. This is the layout we use,
which is the one configured by mfsbsd (http://mfsbsd.vx.sk/):

zroot           22.8G  55.5G   144K  none
zroot/root      20.8G  55.5G  19.4G  /
zroot/root/tmp   592K  55.5G   592K  /tmp
zroot/root/var  1.39G  55.5G  1.39G  /var
zroot/swap       266M  57.2G   266M  -

Pool layout:

zpool status zroot
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d27aba97-5c20-11e2-a017-00259088112a  ONLINE       0     0     0
            gptid/d3226aa9-5c20-11e2-a017-00259088112a  ONLINE       0     0     0

errors: No known data errors

> We wonder if things like the MBR and the ZFS boot loader are happy. How
> does one review whether those are set up correctly?

Not had any issues here.

As a test I would do a micro install using mfsbsd; it only takes a few
minutes and will ensure the basic layout is all good and the machine
boots. If all good, you can then boot back to the installer, restore from
your backup, and reboot.

Regards
Steve
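The "micro install" idea reduces to laying out one disk root-on-ZFS style
and proving the box boots from it before attempting the real restore. A
rough sketch under the assumption of a live environment and a throwaway
pool; every name and path below is a placeholder:

#!/bin/sh
# Scratch single-disk root-on-ZFS install, just to validate the boot chain.
gpart destroy -F mfid0
gpart create -s gpt mfid0
gpart add -s 222 -a 4k -t freebsd-boot mfid0
gpart add -a 4k -t freebsd-zfs mfid0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0

zpool create -m / -o altroot=/mnt -o cachefile=/tmp/zpool.cache ztest mfid0p2
zpool set bootfs=ztest ztest
tar -xpf /path/to/base.txz -C /mnt      # distribution sets; paths are placeholders
tar -xpf /path/to/kernel.txz -C /mnt
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
printf 'zfs_load="YES"\nvfs.root.mountfrom="zfs:ztest"\n' >> /mnt/boot/loader.conf
reboot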
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 20:52:07 2014
Date: Sun, 18 May 2014 16:51:56 -0400 (EDT)
From: Benjamin Kaduk
To: Konstantin Belousov
Cc: freebsd-fs@freebsd.org, Benjamin Kaduk
Subject: Re: Add an assert that v_holdcnt >= v_usecount?
In-Reply-To: <20140517192229.GA74331@kib.kiev.ua>
References: <20140517192229.GA74331@kib.kiev.ua>

On Sat, 17 May 2014, Konstantin Belousov wrote:

> As a note, I have never seen such corruption of an otherwise valid vnode
> state. There were a lot of leaks, but never a mismatched vget/vdrop.

My current hypothesis for what I was actually seeing is that the libafs.ko
was built against GENERIC but the running kernel was using DEBUG_VFS_LOCKS,
etc.

-Ben
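That sort of module/kernel mismatch is checkable at runtime when the
kernel was built with options INCLUDE_CONFIG_FILE, since the configuration
it was compiled from is then exported through a sysctl; a small sketch
(the option list grepped for is just an example):

#!/bin/sh
# Only works if the running kernel has options INCLUDE_CONFIG_FILE.
uname -v    # reports the kernel ident, e.g. GENERIC vs. a debug config
sysctl -n kern.conftxt | grep -E 'DEBUG_VFS_LOCKS|WITNESS|INVARIANTS'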
From owner-freebsd-fs@FreeBSD.ORG Sun May 18 21:04:36 2014
Date: Sun, 18 May 2014 22:04:33 +0100
From: Thomas Hurst
To: freebsd-fs@freebsd.org
Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix
Message-ID: <20140518210432.GA87981@voi.aagh.net>
References: <201405151530.s4FFU0d6050580@freefall.freebsd.org> <5375A3A8.3010406@fsn.hu> <5375BEDC.9090202@FreeBSD.org>

* Matthias Gamsjager (mgamsjager@gmail.com) wrote:
> > Please try to upgrade to r265945 or later.
>
> So far so good. No swapping, good ARC size after 1 day of uptime.

2 days without a byte of swap in use, down from a regular 1-2GB for the
better part of a year. Woo.

Illustrative graph: http://i.imgur.com/TX0TWCv.png

Compare activity after the reboot in the middle with the one at the end.

-- 
Thomas 'Freaky' Hurst
http://hur.st/

From owner-freebsd-fs@FreeBSD.ORG Mon May 19 11:06:44 2014
Date: Mon, 19 May 2014 11:06:44 GMT
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org
Message-Id: <201405191106.s4JB6iXH079993@freefall.freebsd.org>

Note: to view an individual PR, use:
http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.
S Tracker     Resp. Description
--------------------------------------------------------------------------------
o kern/189826 fs [zfs] zpool create using gmirror partition hard-hangs
o kern/189355 fs [zfs] zfs panic on root mount 10-stable
o kern/188443 fs [smbfs] Segfault with tail(1) when mmap(2) called
o kern/188328 fs [zfs] UPDATING should provide caveats for running `zpo
o kern/188187 fs [zfs] [panic] 10-stable: Kernel panic on zpool import:
o kern/187905 fs [zpool] Confusion zpool with a block size in HDD - blo
o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time
o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix
s kern/187414 fs [zfs] ZFS Write Deadlock on 8.4
o kern/187261 fs [fusefs] FUSE kernel panic when using socket / bind
o kern/186942 fs [zfs] [panic] Fatal trap 12 (seems zfs related)
o kern/186720 fs [xfs] is xfs now unsupported in the kernel?
o kern/186645 fs [fusefs] Crash after unmounting wdfs
o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over
o kern/186112 fs [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479
o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185734 fs [zfs] [panic] panic on stable/10 when writing to ZFS d
o kern/185374 fs [msdosfs] [panic] Unmounting msdos filesystem in a bad
o kern/184677 fs [zfs] [panic] ZFS snapshot umount kernel panic
o kern/184478 fs [smbfs] mount_smbfs cannot read/write files
o kern/184013 fs [fusefs] truecrypt broken (probably fusefs issue)
o kern/183077 fs [opensolaris] [patch] don't have the compiler inline t
o kern/182739 fs [fusefs] [panic] sysutils/fusefs-kmod kernel panic on
o kern/182536 fs [zfs] zfs deadlock
o kern/181966 fs [zfs] [panic] Kernel panic in ZFS I/O: solaris assert:
o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181791 fs [zfs] ZFS ARC Deadlock
o kern/181565 fs [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool
o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979 fs [netsmb][patch]: Fix large files handling
o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678 fs [NFS] succesfully exported filesystems being reported
o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854 fs [ufs] FreeBSD kernel crash in UFS
s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS
o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize
o kern/178387 fs [zfs] [patch] sparse files performance improvements
o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see
o kern/178329 fs [zfs] extended attributes leak
o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat
o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966 fs [zfs] resilver completes but subsequent scrub reports
o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445 fs [hast] HAST panic
o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253  fs zpool(8): zfs pool indentation is misleading/wrong
o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime
o kern/175897 fs [zfs] operations on readonly zpool hang
o kern/175449 fs [unionfs] unionfs and devfs misbehaviour
o kern/175328 fs [fusefs] [panic] fusefs kernel page fault
o kern/175179 fs [zfs] ZFS may attach wrong device on move
o kern/175071 fs [softupdates] [panic] softdep_deallocate_dependencies:
o kern/174372 fs [zfs] Pagefault appears to be related to ZFS
o kern/174315 fs [zfs] chflags uchg not supported
o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi
o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718 fs [zfs] phantom directory in zraid2 pool
f kern/173657 fs [nfs] strange UID map with nfsuserd
o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136 fs [unionfs] mounting above the NFS read-only share panic
o kern/172942 fs [smbfs] Unmounting a smb mount when the server became
o kern/172630 fs [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c
o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly
o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus
f kern/172197 fs [zfs] Userquota (as well as groupquota) does not work
o kern/172092 fs [zfs] [panic] zfs import panics kernel
o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental
o kern/170945 fs [gpt] disk layout not portable between direct connect
o bin/170778  fs [zfs] [panic] FreeBSD panics randomly
o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170523 fs [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNT
o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167362 fs [fusefs] Reproduceble Page Fault when
running rsync ov o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs [ufs] [patch] Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162195 fs [softupdates] [panic] panic with soft updates journali o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. 
with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] [panic] zfs/dbuf.c: panic: solaris assert: arc_b o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] [panic] panic during ls(1) zfs snapshot director o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver 
bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [softupdates] [panic] Kernel panic in softdep_disk_wri o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o 
bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] [panic] zpool scrub causes panic if geli vdevs d o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 357 problems total. From owner-freebsd-fs@FreeBSD.ORG Tue May 20 02:18:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0753B1AE for ; Tue, 20 May 2014 02:18:48 +0000 (UTC) Received: from mailbox.supranet.net (mailbox.supranet.net [IPv6:2607:f4e0:100:111::9]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D81DE2F60 for ; Tue, 20 May 2014 02:18:47 +0000 (UTC) Received: from [209.204.169.179] (helo=[192.168.1.201]) by mailbox.supranet.net with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WmZdd-000L0l-Lz for freebsd-fs@freebsd.org; Mon, 19 May 2014 21:18:45 -0500 Date: Mon, 19 May 2014 19:18:44 -0700 From: Jeff Chan Reply-To: Jeff Chan X-Priority: 3 (Normal) Message-ID: <1879018434.20140519191844@supranet.net> To: freebsd-fs@freebsd.org Subject: Re: ZFS snapshot restore not quite working; missing steps? In-Reply-To: <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 02:18:48 -0000 Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions on the wiki page for the initial configuration caused the booting to ZFS to work: https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE including the GNOP trick, since these drives are "advanced format" and greater than 2TB size, and the -a 4k alignment to 4k sectors when creating the ZFS partitions. Jeff C. 
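For anyone reproducing Jeff's procedure, the GNOP trick from that wiki page boils down to a handful of commands; a minimal sketch, assuming a hypothetical partition ada0p3 and pool name zroot (both placeholders, not taken from Jeff's actual setup):

  # align the new partition to 4k sectors (the -a 4k from the wiki page)
  gpart add -a 4k -t freebsd-zfs -l disk0 ada0
  # create a temporary nop provider that advertises 4k sectors, so
  # zpool create picks ashift=12
  gnop create -S 4096 /dev/ada0p3
  zpool create zroot /dev/ada0p3.nop
  # the ashift is recorded in the vdev label at creation time, so the
  # nop device can be thrown away afterwards
  zpool export zroot
  gnop destroy /dev/ada0p3.nop
  zpool import zroot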
From owner-freebsd-fs@FreeBSD.ORG Tue May 20 02:26:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EAC792EF for ; Tue, 20 May 2014 02:26:48 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id AD05A200E for ; Tue, 20 May 2014 02:26:47 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id EA65B20E7088B; Tue, 20 May 2014 02:26:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id BF86C20E70886; Tue, 20 May 2014 02:26:35 +0000 (UTC) Message-ID: From: "Steven Hartland" To: "Jeff Chan" , References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> <1879018434.20140519191844@supranet.net> Subject: Re: ZFS snapshot restore not quite working; missing steps? Date: Tue, 20 May 2014 03:26:35 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 02:26:49 -0000 ----- Original Message ----- From: "Jeff Chan" > Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions > on the wiki page for the initial configuration caused the booting > to ZFS to work: > > https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE > > including the GNOP trick, since these drives are "advanced format" and > greater than 2TB size, and the -a 4k alignment to 4k sectors when > creating the ZFS partitions. On later versions you can set vfs.zfs.min_auto_ashift=12 to achieve the same thing as the GNOP trick, but that's only needed if your drive doesn't have a 4k quirk in our codebase.
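On such versions the whole gnop dance above collapses to a plain sysctl; a minimal sketch (the pool and device names are placeholders):

  # the value is an exponent: 2^12 = 4096-byte sectors
  sysctl vfs.zfs.min_auto_ashift=12
  zpool create tank /dev/ada0p3
  zdb -C tank | grep ashift    # expect ashift: 12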
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Tue May 20 03:52:22 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B73C7274; Tue, 20 May 2014 03:52:22 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8C0442734; Tue, 20 May 2014 03:52:22 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4K3qMhb027365; Tue, 20 May 2014 03:52:22 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4K3qMs7027364; Tue, 20 May 2014 03:52:22 GMT (envelope-from linimon) Date: Tue, 20 May 2014 03:52:22 GMT Message-Id: <201405200352.s4K3qMs7027364@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/189865: [zfs] [patch] zfs_dirty_data_max{, _max, _percent} not exported as loader tunables X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 03:52:22 -0000 Old Synopsis: zfs_dirty_data_max{,_max,_percent} not exported as loader tunables New Synopsis: [zfs] [patch] zfs_dirty_data_max{,_max,_percent} not exported as loader tunables Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue May 20 03:51:56 UTC 2014 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=189865 From owner-freebsd-fs@FreeBSD.ORG Tue May 20 05:14:43 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1684B94E for ; Tue, 20 May 2014 05:14:43 +0000 (UTC) Received: from mailbox.supranet.net (mailbox.supranet.net [IPv6:2607:f4e0:100:111::9]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E6A752CA1 for ; Tue, 20 May 2014 05:14:42 +0000 (UTC) Received: from [209.204.169.179] (helo=[192.168.1.201]) by mailbox.supranet.net with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WmcNs-0003Mr-OS; Tue, 20 May 2014 00:14:40 -0500 Date: Mon, 19 May 2014 22:14:38 -0700 From: Jeff Chan Reply-To: Jeff Chan X-Priority: 3 (Normal) Message-ID: <1787422322.20140519221438@supranet.net> To: "Steven Hartland" Subject: Re: ZFS snapshot restore not quite working; missing steps? 
In-Reply-To: References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> <1879018434.20140519191844@supranet.net> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 05:14:43 -0000 On Monday, May 19, 2014, 7:26:35 PM, Steven Hartland wrote: > ----- Original Message ----- > From: "Jeff Chan" >> Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions >> on the wiki page for the initial configuration caused the booting >> to ZFS to work: >> >> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE >> >> including the GNOP trick, since these drives are "advanced format" and >> greater than 2TB size, and the -a 4k alignment to 4k sectors when >> creating the ZFS partitions. > On later versions you can set vfs.zfs.min_auto_ashift=12 to achieve the > same thing as the GNOP trick, but that's only needed if your drive doesn't > have a 4k quirk in our codebase. Thanks Steve, Do you have a reference for vfs.zfs.min_auto_ashift=12? Cheers, Jeff C. -- Jeff Chan mailto:jeffc@supranet.net http://www.jeffchan.com/ From owner-freebsd-fs@FreeBSD.ORG Tue May 20 08:40:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DB0A9202 for ; Tue, 20 May 2014 08:40:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C84332C28 for ; Tue, 20 May 2014 08:40:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4K8e1iI098237 for ; Tue, 20 May 2014 08:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4K8e1NP098236; Tue, 20 May 2014 08:40:01 GMT (envelope-from gnats) Date: Tue, 20 May 2014 08:40:01 GMT Message-Id: <201405200840.s4K8e1NP098236@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/189865: [zfs] [patch] zfs_dirty_data_max{, _max, _percent} not exported as loader tunables Reply-To: "Steven Hartland" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 08:40:01 -0000 The following reply was made to PR kern/189865; it has been noted by GNATS. From: "Steven Hartland" To: , Cc: Subject: Re: kern/189865: [zfs] [patch] zfs_dirty_data_max{,_max,_percent} not exported as loader tunables Date: Tue, 20 May 2014 09:38:04 +0100 Exposing zfs_dirty_data_max directly doesn't make sense as it's a calculated value based off zfs_dirty_data_max_percent% of all memory and capped at zfs_dirty_data_max_max. Given this it could be limited via setting zfs_dirty_data_max_max.
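As a worked illustration of that calculation, a sketch in sh; the 8 GB machine and the stock 10 percent / 4 GB defaults are assumptions for the example, not figures from this PR:

  # dirty_data_max = min(all memory * percent / 100, dirty_data_max_max)
  ram=$((8 * 1024 * 1024 * 1024))   # assume an 8 GB machine
  pct=10                            # zfs_dirty_data_max_percent
  cap=$((4 * 1024 * 1024 * 1024))   # zfs_dirty_data_max_max
  computed=$((ram * pct / 100))
  [ "$computed" -gt "$cap" ] && computed=$cap
  echo "$computed"                  # -> 858993459, well under the cap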
The following could be exposed:- zfs_dirty_data_max_max zfs_dirty_data_max_percent zfs_dirty_data_sync zfs_delay_min_dirty_percent zfs_delay_scale Would that fulfil your requirement? Regards Steve From owner-freebsd-fs@FreeBSD.ORG Tue May 20 10:48:19 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 530DBE8A for ; Tue, 20 May 2014 10:48:19 +0000 (UTC) Received: from mail-yk0-x230.google.com (mail-yk0-x230.google.com [IPv6:2607:f8b0:4002:c07::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1335E2726 for ; Tue, 20 May 2014 10:48:19 +0000 (UTC) Received: by mail-yk0-f176.google.com with SMTP id q9so178033ykb.21 for ; Tue, 20 May 2014 03:48:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=pHcJPhX+x9QZYEJkeSc/mYUlANHsIxSpRRWAlWg+wA0=; b=NKQvos2OryKnWYT1Tyt0TEC5OsVefvACmTuWO9ywVbyDz4XPULEmA226cDTaGyk2S/ LeZD6xw3VsuD29cZWOgltYCFdkCp8GO25geMtqX9/Ze9E4WuoLuGsTN/FiCVgbdUPqmQ 0e93r7+F6ksMdWzD/ZTwqKOTVS3wHztkMnejktrDvM/sgwzFMewe8NpMeKU/gfnXaivt h4kyYWLNTQ7RyqZTcCmC7zSR8eZ9y23yjpTc51WAg3VyL+M0S6zzxCBq6bA/q+PdsUMP vEXHmLG3B7NOoLEQxLp1/m+vAWftGo8KNR6kubNvKDDkUxm1mpDfCjRPHkaBWqxGv7FM gt/A== MIME-Version: 1.0 X-Received: by 10.236.135.104 with SMTP id t68mr62103625yhi.35.1400582898275; Tue, 20 May 2014 03:48:18 -0700 (PDT) Received: by 10.170.54.8 with HTTP; Tue, 20 May 2014 03:48:18 -0700 (PDT) In-Reply-To: <1787422322.20140519221438@supranet.net> References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> <1879018434.20140519191844@supranet.net> <1787422322.20140519221438@supranet.net> Date: Tue, 20 May 2014 11:48:18 +0100 Message-ID: Subject: Re: ZFS snapshot restore not quite working; missing steps? From: krad To: Jeff Chan Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 10:48:19 -0000 this is what my pool cachefile is set to: [root@carrera /home/krad]# ls -ltr /boot/zfs/zpool.cache -rw-r--r-- 1 root wheel 1060 Mar 31 11:01 /boot/zfs/zpool.cache [root@carrera /home/krad]# df -h /boot/zfs/. Filesystem Size Used Avail Capacity Mounted on spool/ROOT/april 11G 1.4G 9.7G 12% / [root@carrera /home/krad]# zpool get cachefile,bootfs spool NAME PROPERTY VALUE SOURCE spool cachefile - default spool bootfs spool/ROOT/april local you are setting it differently, try the default setting and see if it helps.
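If anyone wants to try krad's suggestion, reverting the property to its default is one command; a sketch using the pool name from his listing:

  # an empty value restores the default cachefile behaviour
  zpool set cachefile="" spool
  zpool get cachefile spool   # the SOURCE column should now read "default"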
On 20 May 2014 06:14, Jeff Chan wrote: > On Monday, May 19, 2014, 7:26:35 PM, Steven Hartland wrote: > > ----- Original Message ----- > > From: "Jeff Chan" > > > >> Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions > >> on the wiki page for the initial configuration caused the booting > >> to ZFS to work: > >> > >> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE > >> > >> including the GNOP trick, since these drives are "advanced format" and > >> greater than 2TB size, and the -a 4k alignment to 4k sectors when > >> creating the ZFS partitions. > > > On later versions you can set vfs.zfs.min_auto_ashift=12 to achieve the > > same thing as the GNOP trick, but thats only needed if your drive doesn't > > have a 4k quirk in our codebase. > > Thanks Steve, > Do you have a reference for vfs.zfs.min_auto_ashift=12? > > Cheers, > > Jeff C. > -- > Jeff Chan > mailto:jeffc@supranet.net > http://www.jeffchan.com/ > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue May 20 12:49:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DB1FA3CF; Tue, 20 May 2014 12:49:47 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 82D2721BF; Tue, 20 May 2014 12:49:47 +0000 (UTC) Received: from firewall.mikej.com (162-238-140-44.lightspeed.lsvlky.sbcglobal.net [162.238.140.44]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s4KCniXS051862 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL); Tue, 20 May 2014 08:49:45 -0400 (EDT) (envelope-from mikej@mikej.com) Received: from firewall.mikej.com (localhost [127.0.0.1]) by firewall.mikej.com (8.14.8/8.14.8) with ESMTP id s4KCnNUV092714 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Tue, 20 May 2014 08:49:43 -0400 (EDT) (envelope-from mikej@mikej.com) Received: (from www@localhost) by firewall.mikej.com (8.14.8/8.14.8/Submit) id s4KCnMs7092713; Tue, 20 May 2014 08:49:22 -0400 (EDT) (envelope-from mikej@mikej.com) X-Authentication-Warning: firewall.mikej.com: www set sender to mikej@mikej.com using -f To: Steven Hartland Subject: Re: ZFS snapshot restore not quite working; missing =?UTF-8?Q?steps=3F?= MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Tue, 20 May 2014 08:49:22 -0400 From: Michael Jung In-Reply-To: References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> <1879018434.20140519191844@supranet.net> Message-ID: <2dce8d576665e2f89fef4567e54589e4@mail.mikej.com> X-Sender: mikej@mikej.com User-Agent: Roundcube Webmail/1.0.1 Cc: freebsd-fs@freebsd.org, owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 12:49:47 -0000 On , Steven Hartland wrote: > ----- Original Message ----- From: "Jeff Chan" > > >> Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions >> on the wiki page for the initial configuration caused the booting >> to ZFS to work: >> >> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE >> >> including the GNOP trick, since these drives are "advanced format" and >> greater than 2TB size, and the -a 4k alignment to 4k sectors when >> creating the ZFS partitions. > > On later versions you can set vfs.zfs.min_auto_ashift=12 to achieve the > same thing as the GNOP trick, but thats only needed if your drive > doesn't > have a 4k quirk in our codebase. > > Regards > Steve > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Steve, FreeBSD firewall 10.0-STABLE FreeBSD 10.0-STABLE #0 r266313: Sat May 17 11:52:18 EDT 2014 root@firewall:/usr/obj/usr/src/sys/VT amd64 Even though /boot/loader.conf contains vfs.zfs.min_auto_ashift=12 the value does not get set. [mikej@firewall ~]$ cat /boot/loader.conf zfs_load="YES" kern.maxswzone=16777216 vfs.zfs.min_auto_ashift=12 [mikej@firewall ~]$ [mikej@firewall ~]$ sysctl -a | grep ashift vfs.zfs.max_auto_ashift: 13 vfs.zfs.min_auto_ashift: 9 [mikej@firewall ~]$ Regards, --mikej From owner-freebsd-fs@FreeBSD.ORG Tue May 20 13:33:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8A119C7; Tue, 20 May 2014 13:33:56 +0000 (UTC) Received: from mail-oa0-x22d.google.com (mail-oa0-x22d.google.com [IPv6:2607:f8b0:4003:c02::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 43E1D25F5; Tue, 20 May 2014 13:33:56 +0000 (UTC) Received: by mail-oa0-f45.google.com with SMTP id l6so505087oag.18 for ; Tue, 20 May 2014 06:33:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=Kbao+SYo4A9R8KfKo26sP4ZbWj1I6F8TF2gQQYxXx4Q=; b=qYVKx6I/rWBDFinezjpbp5BVkdXCDXca80LCg/tW0vY6YqWIboyhoBdOdu4kagmCag VKGwtqeDs5ldkcxczsVV+ivV+lVOmG5/2PO2jNCH8/ZG/HeUDAI+vw5QkKu63k06MG/H qDXBNjxgOYfVLhVbUzOUZenU8nR4aEQDqVYfE+Tf/BLSBGeuU+GzNm0OAI23lwe6FlJ8 j10l769f3E1JWze+BrZ4SS4no2yQkUdMx+ntfzVayeYb0/496klSxt0bnqmBbf29Yr7j g+BInvyMWkA5TOlfkuU8ZxUjIvAELxrEF4aS6PNDj2oso12BQx97WEkkrWwsfy27Sn0z KyrQ== MIME-Version: 1.0 X-Received: by 10.182.43.132 with SMTP id w4mr43632924obl.41.1400592835421; Tue, 20 May 2014 06:33:55 -0700 (PDT) Received: by 10.76.116.231 with HTTP; Tue, 20 May 2014 06:33:55 -0700 (PDT) In-Reply-To: <2dce8d576665e2f89fef4567e54589e4@mail.mikej.com> References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net> <236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> <1879018434.20140519191844@supranet.net> <2dce8d576665e2f89fef4567e54589e4@mail.mikej.com> Date: Tue, 20 May 2014 14:33:55 +0100 Message-ID: Subject: Re: ZFS snapshot restore not quite working; missing steps? 
From: krad To: Michael Jung Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD FS , owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 13:33:56 -0000 put it in sysctl.conf; it's only really needed at pool creation time anyhow, since once the pool is built the ashift is set On 20 May 2014 13:49, Michael Jung wrote: > On , Steven Hartland wrote: > >> ----- Original Message ----- From: "Jeff Chan" >> >> >> Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions >>> on the wiki page for the initial configuration caused the booting >>> to ZFS to work: >>> >>> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE >>> >>> including the GNOP trick, since these drives are "advanced format" and >>> greater than 2TB size, and the -a 4k alignment to 4k sectors when >>> creating the ZFS partitions. >>> >> >> On later versions you can set vfs.zfs.min_auto_ashift=12 to achieve the >> same thing as the GNOP trick, but that's only needed if your drive doesn't >> have a 4k quirk in our codebase. >> >> Regards >> Steve >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > > > > Steve, > > FreeBSD firewall 10.0-STABLE FreeBSD 10.0-STABLE #0 r266313: Sat May 17 > 11:52:18 EDT 2014 root@firewall:/usr/obj/usr/src/sys/VT amd64 > > > Even though /boot/loader.conf contains vfs.zfs.min_auto_ashift=12 the > value does not get set. > > [mikej@firewall ~]$ cat /boot/loader.conf > zfs_load="YES" > kern.maxswzone=16777216 > vfs.zfs.min_auto_ashift=12 > [mikej@firewall ~]$ > > > [mikej@firewall ~]$ sysctl -a | grep ashift > vfs.zfs.max_auto_ashift: 13 > vfs.zfs.min_auto_ashift: 9 > [mikej@firewall ~]$ > > > Regards, > --mikej > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue May 20 13:45:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 82362385; Tue, 20 May 2014 13:45:34 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 42EBC26F3; Tue, 20 May 2014 13:45:34 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 40C4520E7088C; Tue, 20 May 2014 13:45:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 7E26D20E7088A; Tue, 20 May 2014 13:45:28 +0000 (UTC) Message-ID: <7FB2155517174DB8856518F49301DA34@multiplay.co.uk> From: "Steven Hartland" To: "Michael Jung" References: <1129127016.20140517185154@supranet.net> <1198503903.20140518030907@supranet.net>
<236659679.20140518035326@supranet.net> <3E3DCC6DED864A519B02A398B92D1994@multiplay.co.uk> <1879018434.20140519191844@supranet.net> <2dce8d576665e2f89fef4567e54589e4@mail.mikej.com> Subject: Re: ZFS snapshot restore not quite working; missing steps? Date: Tue, 20 May 2014 14:45:20 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 20 May 2014 13:45:34 -0000 ----- Original Message ----- From: "Michael Jung" To: "Steven Hartland" Cc: "Jeff Chan" ; ; Sent: Tuesday, May 20, 2014 1:49 PM Subject: Re: ZFS snapshot restore not quite working; missing steps? > On , Steven Hartland wrote: >> ----- Original Message ----- From: "Jeff Chan" >> >> >>> Update: using the most current FreeBSD 9.0 - 9.2 ZFS instructions >>> on the wiki page for the initial configuration caused the booting >>> to ZFS to work: >>> >>> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE >>> >>> including the GNOP trick, since these drives are "advanced format" and >>> greater than 2TB size, and the -a 4k alignment to 4k sectors when >>> creating the ZFS partitions. >> >> On later versions you can set vfs.zfs.min_auto_ashift=12 to achieve the >> same thing as the GNOP trick, but that's only needed if your drive >> doesn't >> have a 4k quirk in our codebase. > > Steve, > > FreeBSD firewall 10.0-STABLE FreeBSD 10.0-STABLE #0 r266313: Sat May 17 > 11:52:18 EDT 2014 root@firewall:/usr/obj/usr/src/sys/VT amd64 > > > Even though /boot/loader.conf contains vfs.zfs.min_auto_ashift=12 the > value does not get set. > > [mikej@firewall ~]$ cat /boot/loader.conf > zfs_load="YES" > kern.maxswzone=16777216 > vfs.zfs.min_auto_ashift=12 > [mikej@firewall ~]$ > > > [mikej@firewall ~]$ sysctl -a | grep ashift > vfs.zfs.max_auto_ashift: 13 > vfs.zfs.min_auto_ashift: 9 > [mikej@firewall ~]$ Due to restrictions in the current tunable / sysctl interface, only sysctls can be proc-backed, which is needed to validate the input for these values, hence both of them are sysctl-only and not tunables. If you put them in /etc/sysctl.conf instead it will work.
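Concretely, that means moving the knob out of loader.conf; a sketch using Michael's value from above:

  # sysctl.conf entries are applied by rc(8) after boot, when the
  # validation handlers behind these proc-backed sysctls are available
  echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf
  # or set it on the running system right away:
  sysctl vfs.zfs.min_auto_ashift=12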
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Wed May 21 04:10:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 872BEA36 for ; Wed, 21 May 2014 04:10:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 72F79252A for ; Wed, 21 May 2014 04:10:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4L4A1u1025235 for ; Wed, 21 May 2014 04:10:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4L4A1GQ025225; Wed, 21 May 2014 04:10:01 GMT (envelope-from gnats) Date: Wed, 21 May 2014 04:10:01 GMT Message-Id: <201405210410.s4L4A1GQ025225@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Nathaniel W Filardo Subject: Re: kern/189865: [zfs] [patch] zfs_dirty_data_max{,_max,_percent} not exported as loader tunables Reply-To: Nathaniel W Filardo X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 May 2014 04:10:01 -0000 The following reply was made to PR kern/189865; it has been noted by GNATS. From: Nathaniel W Filardo To: Steven Hartland Cc: bug-followup@freebsd.org Subject: Re: kern/189865: [zfs] [patch] zfs_dirty_data_max{,_max,_percent} not exported as loader tunables Date: Wed, 21 May 2014 00:09:20 -0400 --wwU9tsYnHnYeRAKj Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, May 20, 2014 at 09:38:04AM +0100, Steven Hartland wrote: > Exposing zfs_dirty_data_max directly doesn't make sense as it's > a calculated value based off zfs_dirty_data_max_percent% of > all memory and capped at zfs_dirty_data_max_max. I'm pretty sure the intention is that it is computed that way only if not set already -- there's a comparison for == 0 before the value is assigned. See arc_init(): http://fxr.watson.org/fxr/source/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?im=excerpts#L4150 And in the Old World, the zfs.write_limit_override was similarly exported to override the similar computation of zfs.write_limit_max. That said, no, I don't really care too much about this particular tunable; I was just mirroring Solaris. > Given this it could be limited via setting zfs_dirty_data_max_max. Sure. > The following could be exposed:- > zfs_dirty_data_max_max > zfs_dirty_data_max_percent > zfs_dirty_data_sync > zfs_delay_min_dirty_percent > zfs_delay_scale > > Would that fulfil your requirement? It's overkill for my case, but yes, those should probably all be exposed.
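The override behaviour nwf describes can be demonstrated from userland once the value is exported as a tunable; a sketch that assumes the patch under discussion is applied (the 1 GB figure is arbitrary):

  # preset the value before ZFS initialises
  echo 'vfs.zfs.dirty_data_max="1073741824"' >> /boot/loader.conf
  # after a reboot the == 0 branch is never taken, so the preset wins:
  sysctl -n vfs.zfs.dirty_data_max   # -> 1073741824, not 10% of RAM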
Cheers, --nwf; From owner-freebsd-fs@FreeBSD.ORG Wed May 21 13:12:54 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BEB819AC; Wed, 21 May 2014 13:12:54 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 936642654; Wed, 21 May 2014 13:12:54 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4LDCs4e042635; Wed, 21 May 2014 13:12:54 GMT (envelope-from smh@freefall.freebsd.org) Received: (from smh@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4LDCsVm042634; Wed, 21 May 2014 13:12:54 GMT (envelope-from smh) Date: Wed, 21 May 2014 13:12:54 GMT Message-Id: <201405211312.s4LDCsVm042634@freefall.freebsd.org> To: smh@FreeBSD.org, freebsd-fs@FreeBSD.org, smh@FreeBSD.org From: smh@FreeBSD.org Subject: Re: kern/189865: [zfs] [patch] zfs_dirty_data_max{, _max, _percent} not exported as loader tunables X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 May 2014 13:12:54 -0000 Synopsis: [zfs] [patch] zfs_dirty_data_max{,_max,_percent} not exported as loader tunables Responsible-Changed-From-To: freebsd-fs->smh Responsible-Changed-By: smh Responsible-Changed-When: Wed May 21 13:12:54 UTC 2014 Responsible-Changed-Why: I'll take it.
http://www.freebsd.org/cgi/query-pr.cgi?pr=189865 From owner-freebsd-fs@FreeBSD.ORG Thu May 22 03:51:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 988352FA; Thu, 22 May 2014 03:51:14 +0000 (UTC) Received: from mx1.scaleengine.net (beauharnois2.bhs1.scaleengine.net [142.4.218.15]) by mx1.freebsd.org (Postfix) with ESMTP id 58D4F2FB9; Thu, 22 May 2014 03:51:14 +0000 (UTC) Received: from [10.1.1.1] (S01060001abad1dea.hm.shawcable.net [50.70.146.73]) (Authenticated sender: allanjude.freebsd@scaleengine.com) by mx1.scaleengine.net (Postfix) with ESMTPSA id 57F3B7A5EC; Thu, 22 May 2014 03:51:12 +0000 (UTC) Message-ID: <537D7431.4070103@freebsd.org> Date: Wed, 21 May 2014 23:51:13 -0400 From: Allan Jude User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Benedict Reuschling , Warren Block , Eitan Adler , freebsd-fs@freebsd.org Subject: [patch] zfs sysctl patch X-Enigmail-Version: 1.6 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="UmPQgGsCjoFNi1qQdlPmNlfrflLMA4U1l" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 03:51:14 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --UmPQgGsCjoFNi1qQdlPmNlfrflLMA4U1l Content-Type: multipart/mixed; boundary="------------030809030105050304080906" This is a multi-part message in MIME format. --------------030809030105050304080906 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable A recent commit (r266497 by smh) added a number of new sysctls for ZFS. Two of these had minor typos, and the phrasing of another was very awkward.
--------------- Improve sysctl descriptions for: vfs.zfs.dirty_data_max vfs.zfs.dirty_data_max_max vfs.zfs.dirty_data_sync -- Allan Jude --------------030809030105050304080906 Content-Type: text/plain; charset=windows-1252; name="src.zfs.sysctl_desc.diff" Content-Disposition: attachment; filename="src.zfs.sysctl_desc.diff"
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c (revision 266517)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c (working copy)
@@ -144,13 +144,13 @@
 TUNABLE_QUAD("vfs.zfs.dirty_data_max", &zfs_dirty_data_max);
 SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, dirty_data_max, CTLFLAG_RWTUN,
     &zfs_dirty_data_max, 0,
-    "The dirty space limit in bytes after which new writes are halted until "
-    "space becomes available");
+    "The maximum amount of dirty data in bytes after which new writes are "
+    "halted until space becomes available");
 
 TUNABLE_QUAD("vfs.zfs.dirty_data_max_max", &zfs_dirty_data_max_max);
 SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, dirty_data_max_max, CTLFLAG_RDTUN,
     &zfs_dirty_data_max_max, 0,
-    "The absolute cap on diry_data_max when auto calculating");
+    "The absolute cap on dirty_data_max when auto calculating");
 
 TUNABLE_INT("vfs.zfs.dirty_data_max_percent", &zfs_dirty_data_max_percent);
 SYSCTL_INT(_vfs_zfs, OID_AUTO, dirty_data_max_percent, CTLFLAG_RDTUN,
@@ -160,7 +160,7 @@
 TUNABLE_QUAD("vfs.zfs.dirty_data_sync", &zfs_dirty_data_sync);
 SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, dirty_data_sync, CTLFLAG_RWTUN,
     &zfs_dirty_data_sync, 0,
-    "Force at txg if the number of dirty buffer bytes exceed this value");
+    "Force a txg if the number of dirty buffer bytes exceed this value");
 
 static int sysctl_zfs_delay_min_dirty_percent(SYSCTL_HANDLER_ARGS);
 /* No zfs_delay_min_dirty_percent tunable due to limit requirements */
--------------030809030105050304080906-- From owner-freebsd-fs@FreeBSD.ORG Thu May 22 04:25:31 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested)
by hub.freebsd.org (Postfix) with ESMTPS id C1ACAA5B; Thu, 22 May 2014 04:25:31 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "wonkity.com", Issuer "wonkity.com" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 70C51224A; Thu, 22 May 2014 04:25:31 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.8/8.14.8) with ESMTP id s4M4PTAF029123 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 21 May 2014 22:25:30 -0600 (MDT) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.8/8.14.8/Submit) with ESMTP id s4M4PTwx029120; Wed, 21 May 2014 22:25:29 -0600 (MDT) (envelope-from wblock@wonkity.com) Date: Wed, 21 May 2014 22:25:29 -0600 (MDT) From: Warren Block To: Allan Jude Subject: Re: [patch] zfs sysctl patch In-Reply-To: <537D7431.4070103@freebsd.org> Message-ID: References: <537D7431.4070103@freebsd.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Wed, 21 May 2014 22:25:30 -0600 (MDT) Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 04:25:31 -0000 On Wed, 21 May 2014, Allan Jude wrote: > A recent commit (r266497 by smh) added a number of new sysctls for ZFS > > Two of these had minor typos, and the phrasing of another was very awkward. > > --------------- > > Improve sysctl descriptions for: > vfs.zfs.dirty_data_max > vfs.zfs.dirty_data_max_max > vfs.zfs.dirty_data_sync Nice. Approved for the doc side, but please also get approval from smh. 
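(These tunables are ordinary sysctl OIDs, so they are also reachable programmatically. A minimal sketch follows, assuming only the standard sysctlbyname(3) interface; nothing in it is part of the patch itself.)

/*
 * Sketch: read vfs.zfs.dirty_data_max from userland via sysctlbyname(3).
 * Build with: cc -o dirtymax dirtymax.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	uint64_t dirty_max;		/* SYSCTL_UQUAD => unsigned 64-bit */
	size_t len = sizeof(dirty_max);

	if (sysctlbyname("vfs.zfs.dirty_data_max", &dirty_max, &len,
	    NULL, 0) == -1) {
		perror("sysctlbyname");
		return (EXIT_FAILURE);
	}
	printf("vfs.zfs.dirty_data_max = %ju bytes\n", (uintmax_t)dirty_max);
	return (EXIT_SUCCESS);
}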
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 04:56:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 72CC04DA; Thu, 22 May 2014 04:56:49 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 36A85248F; Thu, 22 May 2014 04:56:48 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 495CF20E7088E; Thu, 22 May 2014 04:56:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id D468C20E7088B; Thu, 22 May 2014 04:56:42 +0000 (UTC) Message-ID: <7B840D2D10124A4FAC40C69E91E6C20D@multiplay.co.uk> From: "Steven Hartland" To: "Warren Block" , "Allan Jude" References: <537D7431.4070103@freebsd.org> Subject: Re: [patch] zfs sysctl patch Date: Thu, 22 May 2014 05:56:45 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 04:56:49 -0000 ----- Original Message ----- From: "Warren Block" To: "Allan Jude" Cc: ; "Benedict Reuschling" ; "Eitan Adler" Sent: Thursday, May 22, 2014 5:25 AM Subject: Re: [patch] zfs sysctl patch > On Wed, 21 May 2014, Allan Jude wrote: > >> A recent commit (r266497 by smh) added a number of new sysctls for ZFS >> >> Two of these had minor typos, and the phrasing of another was very awkward. >> >> --------------- >> >> Improve sysctl descriptions for: >> vfs.zfs.dirty_data_max >> vfs.zfs.dirty_data_max_max >> vfs.zfs.dirty_data_sync > > Nice. Approved for the doc side, but please also get approval from smh. All good for me, thanks for reviewing and picking these up. 
Regards Steve

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 05:21:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6FD71D2D; Thu, 22 May 2014 05:21:51 +0000 (UTC) Received: from smtp2.bway.net (smtp2.bway.net [216.220.96.28]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 49CC3271F; Thu, 22 May 2014 05:21:50 +0000 (UTC) Received: from [10.3.2.108] (foon.sporktines.com [96.57.144.66]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: spork@bway.net) by smtp2.bway.net (Postfix) with ESMTPSA id 2EE87958A5; Thu, 22 May 2014 01:21:40 -0400 (EDT) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: [patch] zfs sysctl patch From: Charles Sprickman In-Reply-To: Date: Thu, 22 May 2014 01:21:40 -0400 Message-Id: <020C690B-C468-494F-8E35-3A527E2546E1@bway.net> References: <537D7431.4070103@freebsd.org> To: Warren Block X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler , Allan Jude X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 05:21:51 -0000

On May 22, 2014, at 12:25 AM, Warren Block wrote:

> On Wed, 21 May 2014, Allan Jude wrote:
>
>> A recent commit (r266497 by smh) added a number of new sysctls for ZFS
>>
>> Two of these had minor typos, and the phrasing of another was very awkward.
>>
>> ---------------
>>
>> Improve sysctl descriptions for:
>> vfs.zfs.dirty_data_max
>> vfs.zfs.dirty_data_max_max
>> vfs.zfs.dirty_data_sync
>
> Nice. Approved for the doc side, but please also get approval from smh.

Vaguely OT, but where in the docs will the descriptions of these tunables land?

And is there any place in the docs where all zfs tunables are collected (eg: sysctl vars and loader.conf vars)?

Thanks!
Charles

> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 05:30:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0B6FCE21; Thu, 22 May 2014 05:30:14 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id C06182761; Thu, 22 May 2014 05:30:13 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 3ED7820E7088E; Thu, 22 May 2014 05:30:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 20D2020E7088B; Thu, 22 May 2014 05:30:08 +0000 (UTC) Message-ID: <3A7805ED58BE400CA7BCBDF0A071978C@multiplay.co.uk> From: "Steven Hartland" To: "Charles Sprickman" , "Warren Block" References: <537D7431.4070103@freebsd.org> <020C690B-C468-494F-8E35-3A527E2546E1@bway.net> Subject: Re: [patch] zfs sysctl patch Date: Thu, 22 May 2014 06:30:09 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler , Allan Jude X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 05:30:14 -0000

----- Original Message ----- From: "Charles Sprickman"

> On May 22, 2014, at 12:25 AM, Warren Block wrote:
>
>> On Wed, 21 May 2014, Allan Jude wrote:
>>
>>> A recent commit (r266497 by smh) added a number of new sysctls for ZFS
>>>
>>> Two of these had minor typos, and the phrasing of another was very awkward.
>>>
>>> ---------------
>>>
>>> Improve sysctl descriptions for:
>>> vfs.zfs.dirty_data_max
>>> vfs.zfs.dirty_data_max_max
>>> vfs.zfs.dirty_data_sync
>>
>> Nice. Approved for the doc side, but please also get approval from smh.
>
> Vaguely OT, but where in the docs will the descriptions of these
> tunables land?

Running 'sysctl -d' will display them.

> And is there any place in the docs where all zfs tunables are
> collected (eg: sysctl vars and loader.conf vars)?
No actual document, just sysctl -d

Regards Steve

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 05:32:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7AA6DFD7; Thu, 22 May 2014 05:32:01 +0000 (UTC) Received: from mx1.scaleengine.net (beauharnois2.bhs1.scaleengine.net [142.4.218.15]) by mx1.freebsd.org (Postfix) with ESMTP id 52D0027DA; Thu, 22 May 2014 05:32:00 +0000 (UTC) Received: from [10.1.1.1] (S01060001abad1dea.hm.shawcable.net [50.70.146.73]) (Authenticated sender: allanjude.freebsd@scaleengine.com) by mx1.scaleengine.net (Postfix) with ESMTPSA id 85EA97A70B; Thu, 22 May 2014 05:31:58 +0000 (UTC) Message-ID: <537D8BCF.8070805@freebsd.org> Date: Thu, 22 May 2014 01:31:59 -0400 From: Allan Jude User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Charles Sprickman , Warren Block Subject: Re: [patch] zfs sysctl patch References: <537D7431.4070103@freebsd.org> <020C690B-C468-494F-8E35-3A527E2546E1@bway.net> In-Reply-To: <020C690B-C468-494F-8E35-3A527E2546E1@bway.net> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 05:32:01 -0000

On 2014-05-22 01:21, Charles Sprickman wrote:
> On May 22, 2014, at 12:25 AM, Warren Block wrote:
>
>> On Wed, 21 May 2014, Allan Jude wrote:
>>
>>> A recent commit (r266497 by smh) added a number of new sysctls for ZFS
>>>
>>> Two of these had minor typos, and the phrasing of another was very awkward.
>>>
>>> ---------------
>>>
>>> Improve sysctl descriptions for:
>>> vfs.zfs.dirty_data_max
>>> vfs.zfs.dirty_data_max_max
>>> vfs.zfs.dirty_data_sync
>>
>> Nice. Approved for the doc side, but please also get approval from smh.
>
> Vaguely OT, but where in the docs will the descriptions of these
> tunables land?
>
> And is there any place in the docs where all zfs tunables are
> collected (eg: sysctl vars and loader.conf vars)?
>
> Thanks!
>
> Charles
>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

These descriptions are in 'sysctl -d'

There are descriptions of many of the important sysctls going into the docs, although it is not committed yet.
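For example, with the new descriptions installed they appear directly in the sysctl output (a sketch only; the description text here is the one from the patch earlier in this thread, so the exact wording depends on the kernel you run):

% sysctl -d vfs.zfs.dirty_data_max
vfs.zfs.dirty_data_max: The maximum amount of dirty data in bytes after which new writes are halted until space becomes available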
You can preview here:

http://www.allanjude.com/zfs_handbook/zfs-advanced.html

--
Allan Jude

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 10:38:28 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 22E37636 for ; Thu, 22 May 2014 10:38:28 +0000 (UTC) Received: from mailbox.supranet.net (mailbox.supranet.net [IPv6:2607:f4e0:100:111::9]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0365322E5 for ; Thu, 22 May 2014 10:38:27 +0000 (UTC) Received: from [209.204.169.179] (helo=[192.168.1.201]) by mailbox.supranet.net with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.82 (FreeBSD)) (envelope-from ) id 1WnQOH-0003N2-VS for freebsd-fs@freebsd.org; Thu, 22 May 2014 05:38:26 -0500 Date: Thu, 22 May 2014 03:38:24 -0700 From: Jeff Chan Reply-To: Jeff Chan X-Priority: 3 (Normal) Message-ID: <719056985.20140522033824@supranet.net> To: freebsd-fs@freebsd.org Subject: Turn off RAID read and write caching with ZFS? MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 10:38:28 -0000

As mentioned before we have a server with the LSI 2208 RAID chip which apparently doesn't seem to have HBA firmware available. (If anyone knows of one, please let me know.) Therefore we are running each drive as a separate, individual RAID0, and we've turned off the RAID hardware read and write caching on the claim it performs better with ZFS, such as:

http://forums.freenas.org/index.php?threads/disable-cache-flush.12253/

" cyberjock, Apr 7, 2013

AAh. You have a RAID controller with on-card RAM. Based on my testing with 3 different RAID controllers that had RAM and benchmark and real world tests, here are my recommended settings for ZFS users:

1. Disable your on-card write cache. Believe it or not this improves write performance significantly. I was very disappointed with this choice, but it seems to be a universal truth.
I upgraded one of the cards to 4GB of cache a few months before going to ZFS and I'm disappointed that I wasted my money. It helped a LOT on the Windows server, but in FreeBSD it's a performance killer. :(

2. If your RAID controller supports read-ahead cache, you should set it to either "disabled", the most "conservative" (smallest read-ahead) or "normal" (medium size read-ahead). I found that "conservative" was better for random reads from lots of users and the "normal" was better for things where you were constantly reading a file in order (such as copying a single very large file). If you choose anything else for the read-ahead size the latency of your zpool will go way up because any read by the zpool will be multiplied by 100x because the RAID card is constantly reading a bunch of sectors before and after the one sector or area requested."

Does anyone have any comments or test results about this? I have not attempted to test it independently. Should we run with RAID hardware caching on or off?

Cheers,

Jeff C.

-- Jeff Chan mailto:jeffc@supranet.net http://www.jeffchan.com/

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 12:52:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A16D79F2 for ; Thu, 22 May 2014 12:52:20 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 649ED2F18 for ; Thu, 22 May 2014 12:52:19 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s4MCq8gt014400 for ; Thu, 22 May 2014 07:52:09 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Thu May 22 07:52:09 2014 Message-ID: <537DF2F3.10604@denninger.net> Date: Thu, 22 May 2014 07:52:03 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Turn off RAID read and write caching with ZFS? References: <719056985.20140522033824@supranet.net> In-Reply-To: <719056985.20140522033824@supranet.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed X-Antivirus: avast! (VPS 140521-1, 05/21/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 12:52:20 -0000

On 5/22/2014 5:38 AM, Jeff Chan wrote:
> As mentioned before we have a server with the LSI 2208 RAID chip which
> apparently doesn't seem to have HBA firmware available. (If anyone
> knows of one, please let me know.)
> Therefore we are running each drive
> as a separate, individual RAID0, and we've turned off the RAID hardware
> read and write caching on the claim it performs better with ZFS, such
> as:
>
> http://forums.freenas.org/index.php?threads/disable-cache-flush.12253/
>
> " cyberjock, Apr 7, 2013
>
> AAh. You have a RAID controller with on-card RAM. Based on my
> testing with 3 different RAID controllers that had RAM and benchmark
> and real world tests, here are my recommended settings for ZFS users:
>
> 1. Disable your on-card write cache. Believe it or not this
> improves write performance significantly. I was very disappointed with
> this choice, but it seems to be a universal truth. I upgraded one of
> the cards to 4GB of cache a few months before going to ZFS and I'm
> disappointed that I wasted my money. It helped a LOT on the Windows
> server, but in FreeBSD it's a performance killer. :(
>
> 2. If your RAID controller supports read-ahead cache, you should
> set it to either "disabled", the most "conservative" (smallest
> read-ahead) or "normal" (medium size read-ahead). I found that
> "conservative" was better for random reads from lots of users and the
> "normal" was better for things where you were constantly reading a
> file in order (such as copying a single very large file). If you choose
> anything else for the read-ahead size the latency of your zpool will
> go way up because any read by the zpool will be multiplied by 100x
> because the RAID card is constantly reading a bunch of sectors before
> and after the one sector or area requested."
>
> Does anyone have any comments or test results about this? I have not
> attempted to test it independently. Should we run with RAID hardware
> caching on or off?

That's mostly right.

Write-caching is very evil in a ZFS world, because ZFS checksums each block. If the filesystem gets back an "OK" for a block not actually on the disk ZFS will presume the checksum is ok. If that assumption proves to be false down the road you're going to have a very bad day.

READ caching is not so simple. The problem that comes about is that in order to obtain the best speed from a spinning piece of rust you must read whole tracks. If you don't, you take a latency penalty every time you want a sector, because you must wait for the rust to pass under the head. If you read a single sector and then come back to read a second one, inter-sector gap sync is lost and you get to wait for another rotation.

Therefore what you WANT for spinning rust in virtually all cases is for all reads coming off the rust to be one full **TRACK** in size. If you wind up only using one sector of that track you still don't get hurt materially because you had to wait for the rotational latency anyway as soon as you move the head.

Unfortunately this stopped being easy to figure out quite a long time ago in the disk drive world with the sort of certainty that you need to best-optimize the workload. It used to be that ST506-style drives had 17 sectors per track and RLL 2,7 ones had 26. Then areal density became the limit and variable geometry showed up, frustrating an operating system (or disk controller!) that tried to, at the driver level, issue one DMA command per physical track in an attempt to capitalize on the fact that all but the first sector read for a given rotation were essentially "free".
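To put rough numbers on that rotation penalty (a back-of-the-envelope sketch only; the 7200 RPM figure is an assumed example, not a number from this thread):

/*
 * Sketch: what missing the inter-sector gap costs on spinning rust.
 * Assumes a 7200 RPM drive purely for illustration.
 */
#include <stdio.h>

int
main(void)
{
	double rpm = 7200.0;
	double rev_ms = 60.0 * 1000.0 / rpm;	/* one rotation in ms */

	printf("full rotation:       %.2f ms\n", rev_ms);	/* ~8.33 */
	printf("avg rotational wait: %.2f ms\n", rev_ms / 2.0);	/* ~4.17 */
	/*
	 * Losing sync between two single-sector reads costs another full
	 * rotation (~8.33 ms).  Reading the rest of the track while the
	 * head is already there costs at most that one rotation, once,
	 * which is why full-track reads are nearly free after the seek.
	 */
	return (0);
}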
Modern drives typically try to compensate for their variable-geometryness through their own read-ahead cache, but the exact details of their algorithm are typically not exposed.

What I would love to find is a "buffered" controller that recognizes all of this and works as follows:

1. Writes, when committed, are committed and no return is made until storage has written the data and claims it's on the disk. If the sector(s) written are in the buffer memory (from a previous read in 2 below) then the write physically alters both the disk AND the buffer.

2. Reads are always one full track in size and go into the buffer memory on an LRU basis. A read for a sector already in the buffer memory results in no physical I/O taking place. The controller does not store sectors per-se in the buffer, it stores tracks. This requires that the adapter be able to discern the *actual* underlying geometry of the drive so it knows where track boundaries are. Yes, I know drive caches themselves try to do this, but how well do they manage? Evidence suggests that it's not particularly effective.

Without this, read cache is a crapshoot that gets difficult to tune and is very workload-dependent in terms of what delivers best performance. All you can do is tune (if you're able with a given controller) and test.

--
Karl
karl@denninger.net
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 12:54:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7CB6CBCA; Thu, 22 May 2014 12:54:41 +0000 (UTC) Received: from BLU004-OMC4S3.hotmail.com (blu004-omc4s3.hotmail.com [65.55.111.142]) by mx1.freebsd.org (Postfix) with ESMTP id 45A222F34; Thu, 22 May 2014 12:54:40 +0000 (UTC) Received: from BLU179-W50 ([65.55.111.136]) by BLU004-OMC4S3.hotmail.com with Microsoft SMTPSVC(7.5.7601.22678); Thu, 22 May 2014 05:53:35 -0700 X-TMN: [rwdVpqWJ2nA0RJCkmcFAIPgKJJmR/3Hk] X-Originating-Email: [hsn@sendmail.cz] Message-ID: From: Radim Kolar To: "freebsd-fs@FreeBSD.org" , "bug-followup@freebsd.org" Subject: RE: kern/189355: zfs panic on 10-stable Date: Thu, 22 May 2014 12:53:34 +0000 Importance: Normal In-Reply-To: References: <201405151240.s4FCe1Hw087808@freefall.freebsd.org>, <14010473114D42CC92756838300EEE64@multiplay.co.uk>, , , , , MIME-Version: 1.0 X-OriginalArrivalTime: 22 May 2014 12:53:35.0173 (UTC) FILETIME=[D7CAD750:01CF75BC] Content-Type: text/plain; charset="iso-8859-2" X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 12:54:41 -0000

I have trouble compiling the kernel with optimization disabled to get an accurate line for where it crashes.
I ran into two problems while trying to compile ZFS without optimization:

kern/190101
kern/190103

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 13:00:03 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8B970DB1 for ; Thu, 22 May 2014 13:00:03 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7A9062F78 for ; Thu, 22 May 2014 13:00:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4MD01oZ079883 for ; Thu, 22 May 2014 13:00:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4MD01n2079881; Thu, 22 May 2014 13:00:01 GMT (envelope-from gnats) Date: Thu, 22 May 2014 13:00:01 GMT Message-Id: <201405221300.s4MD01n2079881@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: zfs panic on 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 13:00:03 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: Radim Kolar
To: "freebsd-fs@FreeBSD.org" , "bug-followup@freebsd.org"
Cc:
Subject: RE: kern/189355: zfs panic on 10-stable
Date: Thu, 22 May 2014 12:53:34 +0000

I have trouble compiling the kernel with optimization disabled to get an accurate line for where it crashes. I ran into two problems while trying to compile ZFS without optimization:

kern/190101
kern/190103
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 13:35:02 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0D17763D for ; Thu, 22 May 2014 13:35:02 +0000 (UTC) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C555C2283 for ; Thu, 22 May 2014 13:35:01 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id s4MDXq5N004747; Thu, 22 May 2014 08:33:52 -0500 (CDT) Date: Thu, 22 May 2014 08:33:52 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Karl Denninger Subject: Re: Turn off RAID read and write caching with ZFS? In-Reply-To: <537DF2F3.10604@denninger.net> Message-ID: References: <719056985.20140522033824@supranet.net> <537DF2F3.10604@denninger.net> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Thu, 22 May 2014 08:33:52 -0500 (CDT) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 13:35:02 -0000

On Thu, 22 May 2014, Karl Denninger wrote:
>
> Write-caching is very evil in a ZFS world, because ZFS checksums each block.
> If the filesystem gets back an "OK" for a block not actually on the disk ZFS
> will presume the checksum is ok. If that assumption proves to be false down
> the road you're going to have a very bad day.

I don't agree with the above statement. Non-volatile write caching is very beneficial for zfs since it allows transactions (particularly synchronous zil writes) to complete much quicker. This is important for NFS servers and for databases. What is important is that the cache either be non-volatile (e.g. battery-backed RAM) or absolutely observe zfs's cache flush requests. Volatile caches which don't obey cache flush requests can result in a corrupted pool on power loss, system panic, or controller failure.

Some plug-in RAID cards have poorly performing firmware which causes problems. Only testing or experience from other users can help identify such cards so that they can be avoided or set to their least harmful configuration.
Bob

-- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 13:37:06 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 78D436E3; Thu, 22 May 2014 13:37:06 +0000 (UTC) Received: from mailout12.t-online.de (mailout12.t-online.de [194.25.134.22]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mailout00.t-online.de", Issuer "TeleSec ServerPass DE-1" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0BCFC22AE; Thu, 22 May 2014 13:37:05 +0000 (UTC) Received: from fwd39.aul.t-online.de (fwd39.aul.t-online.de [172.20.27.138]) by mailout12.t-online.de (Postfix) with SMTP id 3C03C5C4E24; Thu, 22 May 2014 15:36:57 +0200 (CEST) Received: from [192.168.119.11] (TWaJz+ZFghGbpZoIvpuM0bt7Fc2wiz7sE0Rb+JKptIngSi3F+O0tj-mJZhCVe+FZka@[84.154.114.101]) by fwd39.t-online.de with esmtp id 1WnTB2-4DgmgK0; Thu, 22 May 2014 15:36:56 +0200 Message-ID: <537DFD70.3010705@freebsd.org> Date: Thu, 22 May 2014 15:36:48 +0200 From: Stefan Esser User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Turn off RAID read and write caching with ZFS? References: <719056985.20140522033824@supranet.net> <537DF2F3.10604@denninger.net> In-Reply-To: <537DF2F3.10604@denninger.net> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-ID: TWaJz+ZFghGbpZoIvpuM0bt7Fc2wiz7sE0Rb+JKptIngSi3F+O0tj-mJZhCVe+FZka X-TOI-MSGID: 2212d1d6-0e3d-46a8-9b05-61af33c701c4 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 13:37:06 -0000

On 22.05.2014 14:52, Karl Denninger wrote: [...]
> Modern drives typically try to compensate for their
> variable-geometryness through their own read-ahead cache, but the exact
> details of their algorithm are typically not exposed.
>
> What I would love to find is a "buffered" controller that recognizes all
> of this and works as follows:
>
> 1. Writes, when committed, are committed and no return is made until
> storage has written the data and claims it's on the disk. If the
> sector(s) written are in the buffer memory (from a previous read in 2
> below) then the write physically alters both the disk AND the buffer.
>
> 2. Reads are always one full track in size and go into the buffer memory
> on an LRU basis. A read for a sector already in the buffer memory
> results in no physical I/O taking place. The controller does not store
> sectors per-se in the buffer, it stores tracks. This requires that the
> adapter be able to discern the *actual* underlying geometry of the drive
> so it knows where track boundaries are. Yes, I know drive caches
> themselves try to do this, but how well do they manage? Evidence
> suggests that it's not particularly effective.

In the old times, controllers implemented read-ahead, either under control of the host-adapter or the host OS (e.g., based on detection of sequential access patterns). This changed when large on-drive caches became practical.
Drives now do aggressive read-ahead caching, but without the penalty this had in the old times.

I do not know whether this applies to all current drives, but since it is old technology, I assume so: The sector layout is reversed on each track - higher numbered sectors come first. The drive starts reading data into its cache as soon as the head receives stable data, and it stops only when the whole requested range of sectors has been read. E.g. if you request sectors 10 to 20, the drive may have the read head positioned when sector 30 comes along. Starting at that sector, data is read from sectors 30, 29, ..., 10 and stored in the drive's cache. Only after sector 10 has been read, data is transferred to the requesting host adapter, while the drive seeks to the next track to operate on.

This scheme offers opportunistic read-ahead, which does not increase the random access seek times. The old method required the head to stay on the track for some milliseconds to read sectors following the requested block on the vague chance that this data might later be requested. The new method just starts reading as soon as there is data under the read head. This needs more cache on the drive, but does not add latency for read-ahead. The disadvantage is that you never know how much read-ahead there will be; it depends on the rotational position of the disk when the seek ends. And if the first sector read from the track is in the middle of the requested range, the drive needs to read the whole track to fulfil the request, but that would happen with equal probability with the old sector layout as well.

> Without this, read cache is a crapshoot that gets difficult to tune and
> is very workload-dependent in terms of what delivers best performance.
> All you can do is tune (if you're able with a given controller) and test.

The read-ahead of reverse sectors as described above does not have any negative side-effect. On average, you'll read half a track into the drive's cache whenever you request a single sector. A controller that implements read-ahead does this by increasing the amount of data requested from the drive. This leads to a higher probability that a full track must be read to satisfy the request and will thus increase latencies observed by the application.
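The "half a track on average" figure is easy to sanity-check with a tiny simulation (a sketch only; the track size and request range are made-up numbers, and the model simplifies the case where the head lands inside the requested range):

/*
 * Monte Carlo sketch of the opportunistic read-ahead described above:
 * with the reversed sector layout the drive starts caching at whatever
 * sector first passes under the head and keeps reading until the whole
 * requested range has been seen.  Everything cached beyond the request
 * is free read-ahead.  All numbers are illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>

#define SECTORS	100		/* sectors per track (assumed) */
#define REQ_HI	20		/* request covers sectors 10..20 */

int
main(void)
{
	long i, trials = 1000000, extra = 0;

	srandom(42);
	for (i = 0; i < trials; i++) {
		int start = random() % SECTORS;	/* where the head lands */
		/* sectors cached before the request is complete */
		extra += (start - REQ_HI + SECTORS) % SECTORS;
	}
	printf("average read-ahead: %.1f of %d sectors\n",
	    (double)extra / trials, SECTORS);
	return (0);
}

With 100 sectors per track this prints an average of about 49.5 sectors, i.e. roughly half a track, matching the claim above.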
Regards, Stefan

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 14:00:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A0E19CCD for ; Thu, 22 May 2014 14:00:45 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 4A16C24B3 for ; Thu, 22 May 2014 14:00:44 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s4ME0ck0040533 for ; Thu, 22 May 2014 09:00:38 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Thu May 22 09:00:38 2014 Message-ID: <537E0301.4010509@denninger.net> Date: Thu, 22 May 2014 09:00:33 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Turn off RAID read and write caching with ZFS? [SB QUAR: Thu May 22 08:33:59 2014] References: <719056985.20140522033824@supranet.net> <537DF2F3.10604@denninger.net> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed X-Antivirus: avast! (VPS 140521-1, 05/21/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 14:00:45 -0000

On 5/22/2014 8:33 AM, Bob Friesenhahn wrote:
> On Thu, 22 May 2014, Karl Denninger wrote:
>>
>> Write-caching is very evil in a ZFS world, because ZFS checksums each
>> block. If the filesystem gets back an "OK" for a block not actually
>> on the disk ZFS will presume the checksum is ok. If that assumption
>> proves to be false down the road you're going to have a very bad day.
>
> I don't agree with the above statement. Non-volatile write caching is
> very beneficial for zfs since it allows transactions (particularly
> synchronous zil writes) to complete much quicker. This is important
> for NFS servers and for databases. What is important is that the
> cache either be non-volatile (e.g. battery-backed RAM) or absolutely
> observe zfs's cache flush requests. Volatile caches which don't obey
> cache flush requests can result in a corrupted pool on power loss,
> system panic, or controller failure.
>
> Some plug-in RAID cards have poorly performing firmware which causes
> problems. Only testing or experience from other users can help
> identify such cards so that they can be avoided or set to their least
> harmful configuration.
>

Let's think this one through.

You have said disk on said controller. It has a battery-backed RAM cache and JBOD drives on it.

Your database says "Write/Commit" and the controller does, to cache, and says "ok, done."
The data is now in the battery-backed cache. Let's further assume the cache is ECC-corrected and we'll accept the risk of an undetected ECC failure (very, very long odds on that one so that seems reasonable.)

Some time passes and other I/O takes place without incident.

Now the *DRIVE* returns an unrecoverable data error during the actual write to spinning rust when the controller (eventually) flushes its cache.

Note that the controller can't rebuild the drive as it doesn't have a second copy; it's JBOD. When does the operating system find out about the fault and what locality of the fault does it learn about?

Be very careful with your assumptions here. If there is more than one filesystem on that drive the I/O that actually returns a fault (because of when it is detected) may in fact be to a *different filesystem* than the one that actually faulted!

The only safe thing for the adapter to do if it detects a failure on a deferred (battery-backed) write is to declare the entire *disk* dead and return error for all subsequent I/O attempts to it, effectively forcing all data on that pack to be declared "gone" at the OS level. You better hope the adapter does that (are you sure yours does?) or you're going to get a surprise of a most-unpleasant sort because there is no way for the adapter to go back and declare a formerly-committed-and-confirmed I/O invalid.

At a minimum by doing this you have multiplied a single-block failure into a failure of *all* blocks on the media as soon as the first one fails. In practice that may not be all that far off the mark (drives have a distressing habit of failing far more than one block at a time) but to force that behavior is something you should be aware of.

There is a very good argument for what amounts to a battery-backed RAM "disk" for ZIL for the reasons you noted. And I do agree there are significant performance improvements to be had from battery-backed RAM adapters in a ZFS environment (by the way, set the zfs logbias to "throughput" rather than "latency" if you're using a controller cache since ZFS is incapable of deterministically predicting latency and that can lead to some really odd behavior) but in terms of operational integrity you are taking risk by doing this.

Then again we lived with that risk in the world before ZFS and hardware-backed RAID in that an *undetected* sector fault was potentially ruinous, and since individual blocks were not checksummed it did occasionally happen.

All configurations carry risk and you have to evaluate which ones you're willing to live with and which ones you simply cannot accept.
--
Karl
karl@denninger.net
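For reference, the logbias setting Karl mentions is an ordinary per-dataset ZFS property (a sketch; "tank/db" is a placeholder pool/dataset, and "latency" shown is the stock default):

# zfs get logbias tank/db
NAME     PROPERTY  VALUE    SOURCE
tank/db  logbias   latency  default
# zfs set logbias=throughput tank/db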
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 14:11:38 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6298F2BF; Thu, 22 May 2014 14:11:38 +0000 (UTC) Received: from mxout1.bln1.prohost.de (mxout1.bln1.prohost.de [91.233.87.26]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 758ED25BA; Thu, 22 May 2014 14:11:37 +0000 (UTC) Received: from fbipool-clients-45-115.fbi.h-da.de (fbipool-clients-45-115.fbi.h-da.de [141.100.45.115]) (authenticated bits=0) by mx1.bln1.prohost.de (8.14.4/8.14.4) with ESMTP id s4MEBMsV003425 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Thu, 22 May 2014 16:11:23 +0200 Message-ID: <537E05A5.8040607@FreeBSD.org> Date: Thu, 22 May 2014 16:11:49 +0200 From: Benedict Reuschling Organization: The FreeBSD Project User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Allan Jude , Charles Sprickman , Warren Block Subject: Re: [patch] zfs sysctl patch References: <537D7431.4070103@freebsd.org> <020C690B-C468-494F-8E35-3A527E2546E1@bway.net> <537D8BCF.8070805@freebsd.org> In-Reply-To: <537D8BCF.8070805@freebsd.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Null-Tag: dc11f05ce529dcd61b5d744143a488aa Cc: freebsd-fs@FreeBSD.org, Eitan Adler X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 14:11:38 -0000

On 22.05.14 07:31, Allan Jude wrote:
> On 2014-05-22 01:21, Charles Sprickman wrote:
>> On May 22, 2014, at 12:25 AM, Warren Block
>> wrote:
>>
>>> On Wed, 21 May 2014, Allan Jude wrote:
>>>
>>>> A recent commit (r266497 by smh) added a number of new
>>>> sysctls for ZFS
>>>>
>>>> Two of these had minor typos, and the phrasing of another was
>>>> very awkward.
>>>>
>>>> ---------------
>>>>
>>>> Improve sysctl descriptions for: vfs.zfs.dirty_data_max
>>>> vfs.zfs.dirty_data_max_max vfs.zfs.dirty_data_sync
>>>
>>> Nice. Approved for the doc side, but please also get approval
>>> from smh.
>>
>> Vaguely OT, but where in the docs will the descriptions of these
>> tunables land?
>>
>> And is there any place in the docs where all zfs tunables are
>> collected (eg: sysctl vars and loader.conf vars)?
>>
>> Thanks!
>>
>> Charles
>>
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to
>>> "freebsd-fs-unsubscribe@freebsd.org"
>>
>
> These descriptions are in 'sysctl -d'
>
> There are descriptions of many of the important sysctls going into
> the docs, although it is not committed yet. You can preview here:
>
> http://www.allanjude.com/zfs_handbook/zfs-advanced.html
>

DES was working on a system to automatically extract all the sysctls and their descriptions (if present) and export them to the docs. Last time he said he had a showstopper, but that was at last year's EuroBSDcon when I asked him, and I didn't get to ask him last week at BSDCan; I think that he is too busy with other stuff.

However, I think this is worthwhile to pursue and maybe someone can pick it up from him. For now, the 'sysctl -d' command can be used as described above.

Cheers

Benedict

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 14:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 854339DC for ; Thu, 22 May 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 712DE269F for ; Thu, 22 May 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4MEK0oQ011401 for ; Thu, 22 May 2014 14:20:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4MEK0AH011400; Thu, 22 May 2014 14:20:00 GMT (envelope-from gnats) Date: Thu, 22 May 2014 14:20:00 GMT Message-Id: <201405221420.s4MEK0AH011400@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 14:20:01 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 14:20:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 854339DC for ; Thu, 22 May 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 712DE269F for ; Thu, 22 May 2014 14:20:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4MEK0oQ011401 for ; Thu, 22 May 2014 14:20:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4MEK0AH011400; Thu, 22 May 2014 14:20:00 GMT (envelope-from gnats) Date: Thu, 22 May 2014 14:20:00 GMT Message-Id: <201405221420.s4MEK0AH011400@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 14:20:01 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: Radim Kolar To: "bug-followup@freebsd.org" Cc: Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Thu, 22 May 2014 14:16:25 +0000

For Google and other FreeBSD users: to compile the FreeBSD kernel and its modules without optimization, use

makeoptions DEBUG="-g -O0"

in the kernel config file.
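[For anyone following the recipe above, a minimal sketch of the whole procedure; the config name DEBUGKERN is invented for this example, and the paths assume a stock /usr/src on amd64:]

    # /usr/src/sys/amd64/conf/DEBUGKERN (hypothetical file name)
    include GENERIC
    ident   DEBUGKERN
    makeoptions DEBUG="-g -O0"   # debug symbols, no optimization, kernel and modules

    # Build and install it:
    cd /usr/src
    make buildkernel KERNCONF=DEBUGKERN
    make installkernel KERNCONF=DEBUGKERN

An unoptimized kernel keeps call frames and local variables intact, which is what makes the stack traces later in this PR readable.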
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 14:26:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C5C48F83 for ; Thu, 22 May 2014 14:26:18 +0000 (UTC) Received: from smtp102-5.vfemail.net (eight.vfemail.net [108.76.175.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 545172764 for ; Thu, 22 May 2014 14:26:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=vfemail.net; h=date :message-id:from:to:subject:references:in-reply-to:content-type :mime-version:content-transfer-encoding; s=default; bh=c+0AWxkdR H/uCFiC2SfBD//pqwsJef4AXmnO3uckdfM=; b=X3oxgTuZRUMqsP3d+06xMcQrE 19IeLU3z6DPuP+Dbk70UyzMCbaNKiNBOiMid30ylkD4bee+8kdNqcI77a+ymSvzC 5EreyMtfWLqrc9A8r8c8JlYLWllu/i5s07YjJM6/43P5+y5VmQz3cxbGgn2ideKw ic1QylGVyZpnU2bIJk= Received: (qmail 23387 invoked by uid 89); 22 May 2014 14:19:33 -0000 Received: by simscan 1.4.0 ppid: 23380, pid: 23383, t: 0.0893s scanners:none Received: from unknown (HELO www111) (cmlja0BoYXZva21vbi5jb20=@MTcyLjE2LjEwMC45Mw==) by 172.16.100.62 with ESMTPA; 22 May 2014 14:19:33 -0000 Received: from rrcs-98-103-53-237.central.biz.rr.com (rrcs-98-103-53-237.central.biz.rr.com [98.103.53.237]) by www.vfemail.net (Horde Framework) with HTTP; Thu, 22 May 2014 09:19:32 -0500 Date: Thu, 22 May 2014 09:19:32 -0500 Message-ID: <20140522091932.Horde.hsT5LUjnShIYq2YrtCVdnA1@www.vfemail.net> From: Rick Romero To: freebsd-fs@freebsd.org Subject: Re: Turn off RAID read and write caching with ZFS? [SB QUAR: Thu May 22 08:33:59 2014] References: <719056985.20140522033824@supranet.net> <537DF2F3.10604@denninger.net> <537E0301.4010509@denninger.net> In-Reply-To: <537E0301.4010509@denninger.net> User-Agent: Internet Messaging Program (IMP) H5 (6.1.7) X-VFEmail-Originating-IP: OTguMTAzLjUzLjIzNw== X-VFEmail-AntiSpam: Notify admin@vfemail.net of any spam, and include VFEmail headers MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed; DelSp=Yes Content-Transfer-Encoding: 8bit Content-Disposition: inline Content-Description: Plaintext Message X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 14:26:18 -0000

Quoting Karl Denninger :
> On 5/22/2014 8:33 AM, Bob Friesenhahn wrote:
>> On Thu, 22 May 2014, Karl Denninger wrote:
>>> Write-caching is very evil in a ZFS world, because ZFS checksums each
>>> block. If the filesystem gets back an "OK" for a block not actually on
>>> the disk ZFS will presume the checksum is ok. If that assumption
>>> proves to be false down the road you're going to have a very bad day.
>>
>> I don't agree with the above statement. Non-volatile write caching is
>> very beneficial for zfs since it allows transactions (particularly
>> synchronous zil writes) to complete much quicker. This is important for
>> NFS servers and for databases. What is important is that the cache
>> either be non-volatile (e.g. battery-backed RAM) or absolutely observe
>> zfs's cache flush requests.
>> Volatile caches which don't obey cache
>> flush requests can result in a corrupted pool on power loss, system
>> panic, or controller failure.
>>
>> Some plug-in RAID cards have poorly performing firmware which causes
>> problems. Only testing or experience from other users can help
>> identify such cards so that they can be avoided or set to their least
>> harmful configuration.
>
> Let's think this one through.
>
> You have said disk on said controller.
>
> It has a battery-backed RAM cache and JBOD drives on it.
>
> Your database says "Write/Commit" and the controller does, to cache, and
> says "ok, done." The data is now in the battery-backed cache. Let's
> further assume the cache is ECC-corrected and we'll accept the risk of
> an undetected ECC failure (very, very long odds on that one so that
> seems reasonable.)
>
> Some time passes and other I/O takes place without incident.
>
> Now the *DRIVE* returns an unrecoverable data error during the actual
> write to spinning rust when the controller (eventually) flushes its
> cache.

Technically, you have the same problem on the local drive's cache. But disabling the write cache on every device just to satisfy ZFS makes things ungodly slow - IMHO. Also, IMHO, your scenario is a bit overstated. In this case, the drive should mark the sector as bad and write its cached data to a new sector, instead of going down the path of having the controller disable the entire disk as you described. In the case of the controller disabling the entire drive, that is actually safer under a controller-based RAID scenario, because the controller cache can write to a different drive if that entire drive fails. When run as cached JBOD, then sure, you could be hosed if the entire drive fails and it's not caught before a write.

So basically, IMHO again: if you run the write cache on the controller and have BBC (battery-backed cache) + UPS, then use controller-based RAID. Don't disable the drive cache in either case, unless you want complete ZFS protection at the cost of performance. I have had ZFS detect a power supply issue by repeatedly disabling drives - so I don't recommend controller-based RAID + write cache; just take the performance hit.

Rick
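[For readers who want to see, or change, the drive-level write cache being debated here, a sketch; ada0/da0 are placeholder device names, and kern.cam.ada.write_cache is the ATA/CAM knob as of the 9.x/10.x era -- verify the exact tunables on your own revision:]

    # ATA disk behind ahci(4)/CAM: show what the drive reports for its caches
    camcontrol identify ada0 | grep -i cache
    # Global ATA write-cache policy (-1 = leave the drive's default,
    # 0 = disable, 1 = enable); per-device kern.cam.ada.N.write_cache
    # tunables are assumed to exist as well:
    sysctl kern.cam.ada.write_cache
    # SCSI/SAS disk: inspect the caching mode page (WCE = write cache enable)
    camcontrol modepage da0 -m 8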
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 16:30:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 551D1A5F for ; Thu, 22 May 2014 16:30:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2A9FF232D for ; Thu, 22 May 2014 16:30:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4MGU1un058653 for ; Thu, 22 May 2014 16:30:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4MGU1Ci058652; Thu, 22 May 2014 16:30:01 GMT (envelope-from gnats) Date: Thu, 22 May 2014 16:30:01 GMT Message-Id: <201405221630.s4MGU1Ci058652@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 16:30:02 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: Radim Kolar To: Steven Hartland , "bug-followup@freebsd.org" Cc: Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Thu, 22 May 2014 16:24:48 +0000

After recompiling the kernel and ZFS with -O0, I get a very different stack trace. I can't get a dump with your patch in this case. Can you identify which kernel component failed? Is it the MPT driver?

panic
dblfault_handler
cpu_search
cpu_search_lowest
sched_lowest
sched_pickcpu
sched_add
intr_event_schedule_thread
intr_event_handle
intr_execute_handlers
lapic_handle_intr
Xapis_isr1
bus_space_write_4
mpt_write
mpt_send_cmd
mpt_execute_req
bus_dmamap_load_ccb
mpt_start
mpt_action
xpt_run_devq
xpt_action_default
scsi_action
xpt_action
dastart
xpt_run_allocq
xpt_schedule
dareprobe
g_disk_access
g_access
g_part_access
g_access
vdev_geom_attach_taster
m_attach_taster
vdev_geom_read_pool_label
spa_generate_rootconf
spa_import_rootpool
zfs_mount
vfs_domount_first
vfs_domount
vfs_donmount
kernel_mount
parse_mount
vfs_mountroot_parse
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 17:29:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8210FF0B; Thu, 22 May 2014 17:29:46 +0000 (UTC) Received: from mx1.scaleengine.net (beauharnois2.bhs1.scaleengine.net [142.4.218.15]) by mx1.freebsd.org (Postfix) with ESMTP id 59CD7281F; Thu, 22 May 2014 17:29:45 +0000 (UTC) Received: from [10.1.1.1] (S01060001abad1dea.hm.shawcable.net [50.70.146.73]) (Authenticated sender: allanjude.freebsd@scaleengine.com) by mx1.scaleengine.net (Postfix) with ESMTPSA id 3DAB07A813; Thu, 22 May 2014 17:29:45 +0000 (UTC) Message-ID: <537E340B.5040108@freebsd.org> Date: Thu, 22 May 2014 13:29:47 -0400 From: Allan Jude User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , Warren Block Subject: Re: [patch] zfs sysctl patch References: <537D7431.4070103@freebsd.org> <7B840D2D10124A4FAC40C69E91E6C20D@multiplay.co.uk> In-Reply-To: <7B840D2D10124A4FAC40C69E91E6C20D@multiplay.co.uk> X-Enigmail-Version: 1.6 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="fKnirj5VudA3JMulrfEhem1Nmc1DSiu9I" Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 17:29:46 -0000

On 2014-05-22 00:56, Steven Hartland wrote:
> ----- Original Message ----- From: "Warren Block"
> To: "Allan Jude"
> Cc: ; "Benedict Reuschling" ; "Eitan Adler"
> Sent: Thursday, May 22, 2014 5:25 AM
> Subject: Re: [patch] zfs sysctl patch
>
>
>> On Wed, 21 May 2014, Allan Jude wrote:
>>
>>> A recent commit (r266497 by smh) added a number of new sysctls for ZFS
>>>
>>> Two of these had minor typos, and the phrasing of another was very
>>> awkward.
>>>
>>> ---------------
>>>
>>> Improve sysctl descriptions for:
>>> vfs.zfs.dirty_data_max
>>> vfs.zfs.dirty_data_max_max
>>> vfs.zfs.dirty_data_sync
>>
>> Nice. Approved for the doc side, but please also get approval from smh.
>
> All good for me, thanks for reviewing and picking these up.
>
> Regards
> Steve

Did the name of the sysctl vfs.zfs.dirty_data_max_max come from OpenZFS or did we pick that?

If it is ours, I would suggest changing it to dirty_data_max_limit because '*_max_max' is confusing and a bit misleading.
--
Allan Jude

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 18:00:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 46B897B9 for ; Thu, 22 May 2014 18:00:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1AB962A94 for ; Thu, 22 May 2014 18:00:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4MI000N089750 for ; Thu, 22 May 2014 18:00:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4MI002A089749; Thu, 22 May 2014 18:00:00 GMT (envelope-from gnats) Date: Thu, 22 May 2014 18:00:00 GMT Message-Id: <201405221800.s4MI002A089749@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Andriy Gapon Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: Andriy Gapon X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 18:00:01 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: Andriy Gapon To: bug-followup@FreeBSD.org, hsn@sendmail.cz Cc: Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Thu, 22 May 2014 20:54:02 +0300

This looks like a possible stack exhaustion.

--
Andriy Gapon
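[Some context for the stack-exhaustion theory: i386 kernels of this era default to KSTACK_PAGES=2, i.e. 2 x 4 KiB = 8 KiB of kernel stack per thread, while amd64 defaults to 4 pages (16 KiB). A double fault in the middle of ZFS's deep root-mount call chain, as in the trace earlier in this PR, is what running off the end of an 8 KiB stack typically looks like.]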
From owner-freebsd-fs@FreeBSD.ORG Thu May 22 18:07:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A11BBB12; Thu, 22 May 2014 18:07:27 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 62B4D2B62; Thu, 22 May 2014 18:07:27 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 15A2B20E7088D; Thu, 22 May 2014 18:07:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id B9EC120E7088B; Thu, 22 May 2014 18:07:19 +0000 (UTC) Message-ID: <7B1E8F19FEE443B7AAAD7C9B2DDB5928@multiplay.co.uk> From: "Steven Hartland" To: "Allan Jude" , "Warren Block" References: <537D7431.4070103@freebsd.org> <7B840D2D10124A4FAC40C69E91E6C20D@multiplay.co.uk> <537E340B.5040108@freebsd.org> Subject: Re: [patch] zfs sysctl patch Date: Thu, 22 May 2014 19:07:23 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, Benedict Reuschling , Eitan Adler X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 18:07:27 -0000

----- Original Message ----- From: "Allan Jude"
> Did the name of the sysctl vfs.zfs.dirty_data_max_max come from OpenZFS
> or did we pick that?
>
> If it is ours, I would suggest changing it to dirty_data_max_limit
> because '*_max_max' is confusing and a bit misleading.

Yes, the sysctls mirror the names of the ZFS variables they affect, which are set by different means in other implementations. So it is best we keep them the same, so that when users search for information about a setting they get the richest set of results.
Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Thu May 22 19:00:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 663D2F50 for ; Thu, 22 May 2014 19:00:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 39E47202A for ; Thu, 22 May 2014 19:00:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4MJ006P011335 for ; Thu, 22 May 2014 19:00:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4MJ00qd011334; Thu, 22 May 2014 19:00:00 GMT (envelope-from gnats) Date: Thu, 22 May 2014 19:00:00 GMT Message-Id: <201405221900.s4MJ00qd011334@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: "Steven Hartland" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 May 2014 19:00:01 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: "Steven Hartland" To: , "Radim Kolar" Cc: Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Thu, 22 May 2014 19:51:28 +0100

Silly question: are you using the i386 and not the amd64 architecture?
If so, can you try adding the following to your kernel config:

options KSTACK_PAGES=4

Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Fri May 23 07:39:21 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6474E2CB; Fri, 23 May 2014 07:39:21 +0000 (UTC) Received: from mail-yh0-x236.google.com (mail-yh0-x236.google.com [IPv6:2607:f8b0:4002:c01::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 190A62C8F; Fri, 23 May 2014 07:39:21 +0000 (UTC) Received: by mail-yh0-f54.google.com with SMTP id i57so3914876yha.41 for ; Fri, 23 May 2014 00:39:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=xnHz7qI6gafHAZXxxlz18AwOY7gpjqE6oAePFAKQNqI=; b=D7xzDHNPPVBMIoYTN8k530wl/YfHzMHeSPYmpT+UjM/9ONTklG5fA2WiLzyWEHD8OY 0xxHFlm6xlK6tMP+se8JEU8k/BjYIUlp7KPTIr8dEvBZg4N+ipNQ/mGnhWapC0SwmELX HB2LyBYa1Pqbito8wv70M61D2IXQaoHPOONIRzUm34Xy4JTZQb7rhbBKQHxYXzK6cueG h+OSoNT1N4WfTyakyZ2BtWuDnfF3D+azeJ5KbsNT+O6PqaDpPA1JmQ1MdsKY3mm6uWFh 0POUd/FMBa/Hq7zvGEfDqnqNnG3Xm+uVZOnacTf82OrUCI+EEWAlvZ9WFPHP3gjWD7ab qnog== MIME-Version: 1.0 X-Received: by 10.236.147.232 with SMTP id t68mr4298684yhj.127.1400830759841; Fri, 23 May 2014 00:39:19 -0700 (PDT) Received: by 10.170.54.8 with HTTP; Fri, 23 May 2014 00:39:19 -0700 (PDT) In-Reply-To: <537DFD70.3010705@freebsd.org> References: <719056985.20140522033824@supranet.net> <537DF2F3.10604@denninger.net> <537DFD70.3010705@freebsd.org> Date: Fri, 23 May 2014 08:39:19 +0100 Message-ID: Subject: Re: Turn off RAID read and write caching with ZFS? From: krad To: Stefan Esser Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 07:39:21 -0000

I think the general rule is correct in that you should turn off caching, simply because you have two algorithms trying to be clever and most probably undermining each other. If you are worried about synchronous writes, then put some SSDs in and put the ZIL on them; even if it's under the RAID card, just make sure it's a separate volume that's exported as a LUN. There may be some gains to be made in certain scenarios with this hardware caching, but that means you are going to have to test extensively to make sure it's working in your case, which may or may not be worthwhile to you.

On 22 May 2014 14:36, Stefan Esser wrote:
> On 22.05.2014 14:52, Karl Denninger wrote:
> [...]
> > Modern drives typically try to compensate for their
> > variable-geometryness through their own read-ahead cache, but the exact
> > details of their algorithm are typically not exposed.
> >
> > What I would love to find is a "buffered" controller that recognizes all
> > of this and works as follows:
> >
> > 1. Writes, when committed, are committed and no return is made until
> > storage has written the data and claims it's on the disk.
> > If the
> > sector(s) written are in the buffer memory (from a previous read in 2
> > below) then the write physically alters both the disk AND the buffer.
> >
> > 2. Reads are always one full track in size and go into the buffer memory
> > on a LRU basis. A read for a sector already in the buffer memory
> > results in no physical I/O taking place. The controller does not store
> > sectors per-se in the buffer, it stores tracks. This requires that the
> > adapter be able to discern the *actual* underlying geometry of the drive
> > so it knows where track boundaries are. Yes, I know drive caches
> > themselves try to do this, but how well do they manage? Evidence
> > suggests that it's not particularly effective.
>
> In the old times, controllers implemented read-ahead, either under
> control of the host-adapter or the host OS (e.g. based on the
> detection of sequential access patterns).
>
> This changed when large on-drive caches became practical. Drives
> now do aggressive read-ahead caching, but without the penalty this
> had in the old times. I do not know whether this applies to all
> current drives, but since it is old technology, I assume so:
>
> The sector layout is reversed on each track - higher numbered
> sectors come first. The drive starts reading data into its cache
> as soon as the head receives stable data, and it stops only when
> the whole requested range of sectors has been read.
>
> E.g. if you request sectors 10 to 20, the drive may have the read
> head positioned when sector 30 comes along. Starting at that sector,
> data is read from sectors 30, 29, ..., 10 and stored in the drive's
> cache. Only after sector 10 has been read is data transferred to
> the requesting host adapter, while the drive seeks to the next
> track to operate on. This scheme offers opportunistic read-ahead,
> which does not increase the random access seek times.
>
> The old method required the head to stay on the track for some
> milliseconds to read sectors following the requested block on the
> vague chance that this data might later be requested.
>
> The new method just starts reading as soon as there is data under
> the read head. This needs more cache on the drive, but does not add
> latency for read-ahead. The disadvantage is that you never know
> how much read-ahead there will be; it depends on the rotational
> position of the disk when the seek ends. And if the first sector
> read from the track is in the middle of the requested range, the
> drive needs to read the whole track to fulfil the request, but
> that would happen with equal probability with the old sector
> layout as well.
>
> > Without this read cache is a crapshoot that gets difficult to tune and
> > is very workload-dependent in terms of what delivers best performance.
> > All you can do is tune (if you're able with a given controller) and test.
>
> The read-ahead of reverse sectors as described above does not have
> any negative side-effect. On average, you'll read half a track into
> the drive's cache whenever you request a single sector.
>
> A controller that implements read-ahead does this by increasing the
> amount of data requested from the drive. This leads to a higher
> probability that a full track must be read to satisfy the request
> and will thus increase latencies observed by the application.
>
> Regards, STefan
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>

From owner-freebsd-fs@FreeBSD.ORG Fri May 23 10:40:02 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1AECBE71 for ; Fri, 23 May 2014 10:40:02 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 093732C4B for ; Fri, 23 May 2014 10:40:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4NAe1Me086862 for ; Fri, 23 May 2014 10:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4NAe1fS086861; Fri, 23 May 2014 10:40:01 GMT (envelope-from gnats) Date: Fri, 23 May 2014 10:40:01 GMT Message-Id: <201405231040.s4NAe1fS086861@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Radim Kolar Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Reply-To: Radim Kolar X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 10:40:02 -0000

The following reply was made to PR kern/189355; it has been noted by GNATS.

From: Radim Kolar To: Steven Hartland , "bug-followup@freebsd.org" Cc: Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Fri, 23 May 2014 10:30:08 +0000

Yes, it's i386; I will recompile the kernel and report results.
From owner-freebsd-fs@FreeBSD.ORG Fri May 23 16:26:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4F9AEBA5; Fri, 23 May 2014 16:26:20 +0000 (UTC) Received: from blu0-omc3-s25.blu0.hotmail.com (blu0-omc3-s25.blu0.hotmail.com [65.55.116.100]) by mx1.freebsd.org (Postfix) with ESMTP id 18B1C2E08; Fri, 23 May 2014 16:26:19 +0000 (UTC) Received: from BLU179-W28 ([65.55.116.72]) by blu0-omc3-s25.blu0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675); Fri, 23 May 2014 09:26:13 -0700 X-TMN: [ixtqE6cHKcS2aNVYyhaBNBLnxxYokOPs] X-Originating-Email: [hsn@sendmail.cz] Message-ID: From: Radim Kolar To: Steven Hartland , "bug-followup@freebsd.org" , "freebsd-fs@freebsd.org" Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Fri, 23 May 2014 16:26:13 +0000 Importance: Normal In-Reply-To: References: <9423EBFC865C4281A93352433E62219E@multiplay.co.uk>, MIME-Version: 1.0 X-OriginalArrivalTime: 23 May 2014 16:26:13.0555 (UTC) FILETIME=[B6CB3030:01CF76A3] Content-Type: text/plain; charset="iso-8859-2" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 16:26:20 -0000

With "options KSTACK_PAGES=4" in the kernel config it does not panic anymore. Please commit a patch making this the default value on i386 (32-bit).
From owner-freebsd-fs@FreeBSD.ORG Fri May 23 16:57:21 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4ADCCFC0; Fri, 23 May 2014 16:57:21 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 0E12A20DF; Fri, 23 May 2014 16:57:20 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id DAE6920E7088D; Fri, 23 May 2014 16:57:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id ADC8E20E70885; Fri, 23 May 2014 16:57:12 +0000 (UTC) Message-ID: <37D84F569F29444F935CA9493DE200C3@multiplay.co.uk> From: "Steven Hartland" To: "Radim Kolar" , , References: <9423EBFC865C4281A93352433E62219E@multiplay.co.uk>, Subject: Re: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Fri, 23 May 2014 17:57:17 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-2"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 16:57:21 -0000

This is already noted in UPDATING. With i386 being an aging technology which really shouldn't be used to run ZFS, and one where machines often have small amounts of memory, this should be left to those who do require it to build a custom kernel, I'm afraid.
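[The custom kernel Steven refers to is a small change; a sketch, with the config name ZFSROOT invented for this example:]

    # /usr/src/sys/i386/conf/ZFSROOT (hypothetical file name)
    include GENERIC
    ident   ZFSROOT
    options KSTACK_PAGES=4    # 4 x 4 KiB = 16 KiB kernel stacks, matching the amd64 default

    # Build and install:
    cd /usr/src
    make buildkernel KERNCONF=ZFSROOT
    make installkernel KERNCONF=ZFSROOT

[On later branches there is also a kern.kstack_pages loader tunable that avoids the rebuild entirely, but whether a given 10.x revision has it would need checking against UPDATING.]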
From owner-freebsd-fs@FreeBSD.ORG Fri May 23 17:16:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4A330999 for ; Fri, 23 May 2014 17:16:38 +0000 (UTC) Received: from noel.decibel.org (99-153-64-76.uvs.austtx.sbcglobal.net [99.153.64.76]) by mx1.freebsd.org (Postfix) with ESMTP id 0FBF72312 for ; Fri, 23 May 2014 17:16:38 +0000 (UTC) Received: from [10.69.224.71] (unknown [137.122.64.62]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by noel.decibel.org (Postfix) with ESMTPSA id 6BEEC6D45D for ; Fri, 23 May 2014 17:16:37 +0000 (UTC) From: Jim Nasby Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable Subject: Fwd: GEOM_MIRROR: Synchronization request failed (error=5). Date: Fri, 23 May 2014 12:16:36 -0500 References: <20A5D934-D95B-4650-9DD3-2879D7FC016B@nasby.net> To: freebsd-fs@freebsd.org Message-Id: Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\)) X-Mailer: Apple Mail (2.1874) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 17:16:38 -0000

Didn't get an answer on -questions… can anyone shed some light on this?
Begin forwarded message:

> From: Jim Nasby
> Subject: Re: GEOM_MIRROR: Synchronization request failed (error=5).
> Date: May 18, 2014 at 1:25:37 PM CDT
> To: freebsd-questions@freebsd.org
>
> On May 18, 2014, at 1:17 PM, Jim Nasby wrote:
>> Trying to add a disk to an existing gmirror (which, unfortunately is sitting on a GPT partitioned drive):
>>
>> GEOM_MIRROR: Synchronization request failed (error=5). ad4[WRITE(offset=141343457280, length=131072)]
>>
>> Does this mean the new drive (ad4) is bad? smartmon doesn't show anything and a quick bonnie++ test was OK.
>
> More info:
>
> Here's what was in /var/log/messages right before that error:
>
> May 18 17:36:13 noel kernel: GEOM_MIRROR: Device gm1: rebuilding provider ad4.
> May 18 17:58:39 noel kernel: ad4: WARNING - WRITE_DMA48 UDMA ICRC error (retrying request) LBA=276061440
> May 18 17:58:39 noel kernel: ad4: FAILURE - WRITE_DMA48 status=51 error=10 LBA=276061440
> May 18 17:58:39 noel kernel: GEOM_MIRROR: Synchronization request failed (error=5). ad4[WRITE(offset=141343457280, length=131072)]
>
> After this I did gmirror forget, gmirror clear and then re-inserted ad4 into the mirror. As soon as that was done I started seeing this:
>
> May 18 13:18:06 noel sudo: decibel : TTY=pts/3 ; PWD=/home/decibel ; USER=root ; COMMAND=/sbin/gmirror insert gm1 ad4
> May 18 18:18:06 noel kernel: GEOM_MIRROR: Device gm1: rebuilding provider ad4.
> May 18 18:18:06 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=18176
> May 18 18:18:07 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=233216
> …
> May 18 18:20:32 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=29618944
> May 18 18:20:35 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=30076928
> May 18 18:20:54 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=33539072
>
> Looking at http://lists.freebsd.org/pipermail/freebsd-questions/2005-August/095214.html, I tried changing the mode:
>
> May 18 13:20:58 noel sudo: decibel : TTY=pts/3 ; PWD=/home/decibel ; USER=root ; COMMAND=/sbin/atacontrol mode ad4
> May 18 18:21:05 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=35777024
> May 18 18:21:13 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=37299200
> May 18 13:21:14 noel sudo: decibel : TTY=pts/3 ; PWD=/home/decibel ; USER=root ; COMMAND=/sbin/atacontrol mode ad4 UDMA4
>
> It didn't help…
>
> May 18 18:21:28 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=40399360
> May 18 18:22:04 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=47683328
> …
> May 18 18:23:30 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=64849664
> May 18 18:23:30 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=64880896
>
> Errors are still happening.
>
> I can easily return the drive if that's the problem here, but as I mentioned a bonnie++ test didn't generate any errors.
> --
> Jim C.
> Nasby, Data Architect jim@nasby.net
> 512.569.9461 (cell) http://jim.nasby.net
>
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"
>

--
Jim C. Nasby, Data Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net

From owner-freebsd-fs@FreeBSD.ORG Fri May 23 17:23:05 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2F961F07; Fri, 23 May 2014 17:23:05 +0000 (UTC) Received: from blu0-omc3-s24.blu0.hotmail.com (blu0-omc3-s24.blu0.hotmail.com [65.55.116.99]) by mx1.freebsd.org (Postfix) with ESMTP id C7C2F23D4; Fri, 23 May 2014 17:23:04 +0000 (UTC) Received: from BLU179-W11 ([65.55.116.73]) by blu0-omc3-s24.blu0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675); Fri, 23 May 2014 10:23:03 -0700 X-TMN: [7ktihA8WGwpXGM22YxEdl+AY/sCjK0TG] X-Originating-Email: [hsn@sendmail.cz] Message-ID: From: Radim Kolar To: Steven Hartland , "bug-followup@freebsd.org" , "freebsd-fs@freebsd.org" Subject: RE: kern/189355: [zfs] zfs panic on root mount 10-stable Date: Fri, 23 May 2014 17:23:03 +0000 Importance: Normal In-Reply-To: References: <9423EBFC865C4281A93352433E62219E@multiplay.co.uk>, , MIME-Version: 1.0 X-OriginalArrivalTime: 23 May 2014 17:23:03.0181 (UTC) FILETIME=[A716D3D0:01CF76AB] Content-Type: text/plain; charset="iso-8859-2" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 17:23:05 -0000

It's not just about building a custom kernel: GENERIC from 10-STABLE panics on i386 when mounting a ZFS root too.

Maybe ZFS should not be used on i386, but it should not panic the system. If you are against changing the default kernel configuration on i386, then add another warning to the boot messages, similar to this:

ZFS WARNING: Recommended minimum kmem_size is 512MB; expect unstable behavior.
             Consider tuning vm.kmem_size and vm.kmem_size_max
             in /boot/loader.conf.
From owner-freebsd-fs@FreeBSD.ORG Fri May 23 17:46:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C3B9A93E; Fri, 23 May 2014 17:46:51 +0000 (UTC) Received: from mailout07.t-online.de (mailout07.t-online.de [194.25.134.83]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mailout00.t-online.de", Issuer "TeleSec ServerPass DE-1" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5600F2584; Fri, 23 May 2014 17:46:51 +0000 (UTC) Received: from fwd12.aul.t-online.de (fwd12.aul.t-online.de [172.20.26.241]) by mailout07.t-online.de (Postfix) with SMTP id 07EEB4ABD01; Fri, 23 May 2014 19:46:49 +0200 (CEST) Received: from [192.168.119.26] (Sr1r+eZbghBKbSjcJYhr0K48PR3bQIpFQqPKROHeB4moS5vEiZIIqOzB+Un5dv9ZE2@[84.154.114.101]) by fwd12.t-online.de with esmtp id 1WntYD-30TdJI0; Fri, 23 May 2014 19:46:37 +0200 Message-ID: <537F8972.40104@freebsd.org> Date: Fri, 23 May 2014 19:46:26 +0200 From: Stefan Esser User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: jim@nasby.net Subject: Re: Fwd: GEOM_MIRROR: Synchronization request failed (error=5). References: <20A5D934-D95B-4650-9DD3-2879D7FC016B@nasby.net> In-Reply-To: X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 8bit X-ID: Sr1r+eZbghBKbSjcJYhr0K48PR3bQIpFQqPKROHeB4moS5vEiZIIqOzB+Un5dv9ZE2 X-TOI-MSGID: 5fea4cfc-db5c-4be5-8add-1d4a4bda60e4 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 May 2014 17:46:51 -0000

On 23.05.2014 19:16, Jim Nasby wrote:
> Didn’t get an answer on -questions… can anyone shed some light on this?
>
> Begin forwarded message:
>
>> From: Jim Nasby
>> Subject: Re: GEOM_MIRROR: Synchronization request failed (error=5).
>> Date: May 18, 2014 at 1:25:37 PM CDT
>> To: freebsd-questions@freebsd.org
>>
>> On May 18, 2014, at 1:17 PM, Jim Nasby wrote:
>>> Trying to add a disk to an existing gmirror (which, unfortunately is sitting on a GPT partitioned drive):
>>>
>>> GEOM_MIRROR: Synchronization request failed (error=5). ad4[WRITE(offset=141343457280, length=131072)]
>>>
>>> Does this mean the new drive (ad4) is bad? smartmon doesn’t show anything and a quick bonnie++ test was OK.
>>
>> More info:
>>
>> Here’s what was in /var/log/messages right before that error:
>>
>> May 18 17:36:13 noel kernel: GEOM_MIRROR: Device gm1: rebuilding provider ad4.
>> May 18 17:58:39 noel kernel: ad4: WARNING - WRITE_DMA48 UDMA ICRC error (retrying request) LBA=276061440
>> May 18 17:58:39 noel kernel: ad4: FAILURE - WRITE_DMA48 status=51 error=10 LBA=276061440
>> May 18 17:58:39 noel kernel: GEOM_MIRROR: Synchronization request failed (error=5). ad4[WRITE(offset=141343457280, length=131072)]
>>
>> After this I did gmirror forget, gmirror clear and then re-inserted ad4 into the mirror. As soon as that was done I started seeing this:
>>
>> May 18 13:18:06 noel sudo: decibel : TTY=pts/3 ; PWD=/home/decibel ; USER=root ; COMMAND=/sbin/gmirror insert gm1 ad4
>> May 18 18:18:06 noel kernel: GEOM_MIRROR: Device gm1: rebuilding provider ad4.
>> May 18 18:18:06 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=18176 >> May 18 18:18:07 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=233216 >> … >> May 18 18:20:32 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=29618944 >> May 18 18:20:35 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=30076928 >> May 18 18:20:54 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=33539072 >> >> Look at http://lists.freebsd.org/pipermail/freebsd-questions/2005-August/095214.html, I tried changing the mode: >> >> May 18 13:20:58 noel sudo: decibel : TTY=pts/3 ; PWD=/home/decibel ; USER=root ; COMMAND=/sbin/atacontrol mode ad4 >> May 18 18:21:05 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=35777024 >> May 18 18:21:13 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=37299200 >> May 18 13:21:14 noel sudo: decibel : TTY=pts/3 ; PWD=/home/decibel ; USER=root ; COMMAND=/sbin/atacontrol mode ad4 UDMA4 >> >> It didn’t help… >> >> May 18 18:21:28 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=40399360 >> May 18 18:22:04 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=47683328 >> … >> May 18 18:23:30 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=64849664 >> May 18 18:23:30 noel kernel: ad4: WARNING - WRITE_DMA UDMA ICRC error (retrying request) LBA=64880896 >> >> Errors are still happening. >> >> I can easily return the drive if that’s the problem here, but as I mentioned a bonnie++ test didn’t generate any errors. The ICRC error that is reported indicates a problem with the data cable between the drive and the controller. I'd try a different cable ... 
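[A host-side way to corroborate the cable diagnosis, sketched on the assumption that sysutils/smartmontools is installed; SMART attribute 199 counts interface CRC errors, which a bad cable drives up even while the platters stay healthy:]

    # UDMA/SATA CRC errors are tallied in SMART attribute 199:
    smartctl -A /dev/ad4 | grep -i crc
    # A rising raw value for 199 UDMA_CRC_Error_Count implicates the cable
    # or backplane rather than the disk surface. After swapping the cable,
    # watch the mirror resynchronize cleanly:
    gmirror status gm1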
Regards, STefan

From owner-freebsd-fs@FreeBSD.ORG Mon May 26 11:06:45 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 52AC5D95 for ; Mon, 26 May 2014 11:06:45 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3EA6324D8 for ; Mon, 26 May 2014 11:06:45 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4QB6j9A032000 for ; Mon, 26 May 2014 11:06:45 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4QB6ipp031997 for freebsd-fs@FreeBSD.org; Mon, 26 May 2014 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 26 May 2014 11:06:44 GMT Message-Id: <201405261106.s4QB6ipp031997@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 26 May 2014 11:06:45 -0000

Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker     Resp. Description
--------------------------------------------------------------------------------
o kern/189826 fs [zfs] zpool create using gmirror partition hard-hangs
o kern/189355 fs [zfs] zfs panic on root mount 10-stable
o kern/188443 fs [smbfs] Segfault with tail(1) when mmap(2) called
o kern/188328 fs [zfs] UPDATING should provide caveats for running `zpo
o kern/188187 fs [zfs] [panic] 10-stable: Kernel panic on zpool import:
o kern/187905 fs [zpool] Confusion zpool with a block size in HDD - blo
o kern/187778 fs [zfs] Two ZFS filesystems mounted on / at same time
o kern/187594 fs [zfs] [patch] ZFS ARC behavior problem and fix
s kern/187414 fs [zfs] ZFS Write Deadlock on 8.4
o kern/187261 fs [fusefs] FUSE kernel panic when using socket / bind
o kern/186942 fs [zfs] [panic] Fatal trap 12 (seems zfs related)
o kern/186720 fs [xfs] is xfs now unsupported in the kernel?
o kern/186645 fs [fusefs] Crash after unmounting wdfs
o kern/186515 fs [gptboot] Doesn't boot with GPT when # of entries over
o kern/186112 fs [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479
o kern/185963 fs [zfs] Kernel crash trying to import a damaged ZFS pool
o kern/185734 fs [zfs] [panic] panic on stable/10 when writing to ZFS d
o kern/185374 fs [msdosfs] [panic] Unmounting msdos filesystem in a bad
o kern/184677 fs [zfs] [panic] ZFS snapshot umount kernel panic
o kern/184478 fs [smbfs] mount_smbfs cannot read/write files
o kern/184013 fs [fusefs] truecrypt broken (probably fusefs issue)
o kern/183077 fs [opensolaris] [patch] don't have the compiler inline t
o kern/182739 fs [fusefs] [panic] sysutils/fusefs-kmod kernel panic on
o kern/182536 fs [zfs] zfs deadlock
o kern/181966 fs [zfs] [panic] Kernel panic in ZFS I/O: solaris assert:
o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181791 fs [zfs] ZFS ARC Deadlock
o kern/181565 fs [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool
o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979 fs [netsmb][patch]: Fix large files handling
o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678 fs [NFS] succesfully exported filesystems being reported
o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854 fs [ufs] FreeBSD kernel crash in UFS
s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS
o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize
o kern/178387 fs [zfs] [patch] sparse files performance improvements
o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see
o kern/178329 fs [zfs] extended attributes leak
o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat
o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966 fs [zfs] resilver completes but subsequent scrub reports
o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445 fs [hast] HAST panic
o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong
o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime
o kern/175897 fs [zfs] operations on readonly zpool hang
o kern/175449 fs [unionfs] unionfs and devfs misbehaviour
o kern/175328 fs [fusefs] [panic] fusefs kernel page fault
o kern/175179 fs [zfs] ZFS may attach wrong device on move
o kern/175071 fs [softupdates] [panic] softdep_deallocate_dependencies:
o kern/174372 fs [zfs] Pagefault appears to be related to ZFS
o kern/174315 fs [zfs] chflags uchg not supported
o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi
o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718 fs [zfs] phantom directory in zraid2 pool
f kern/173657 fs [nfs] strange UID map with nfsuserd
o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136 fs [unionfs] mounting above the NFS read-only share panic
o kern/172942 fs [smbfs] Unmounting a smb mount when the server became
o kern/172630 fs [zfs] [lor] zfs/zfs_vfsops.c kern/kern_descrip.c
o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly
o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus
f kern/172197 fs [zfs] Userquota (as well as groupquota) does not work
o kern/172092 fs [zfs] [panic] zfs import panics kernel
o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental
o kern/170945 fs [gpt] disk layout not portable between direct connect
o bin/170778 fs [zfs] [panic] FreeBSD panics randomly
o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170523 fs [zfs] zfs rename pool@snapshot1 pool@snapshot2 UNMOUNT
o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167362 fs [fusefs] Reproduceble Page Fault when
running rsync ov o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs [ufs] [patch] Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162195 fs [softupdates] [panic] panic with soft updates journali o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. 
with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] [panic] zfs/dbuf.c: panic: solaris assert: arc_b o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] [panic] panic during ls(1) zfs snapshot director o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver 
bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [softupdates] [panic] Kernel panic in softdep_disk_wri o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o 
bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] [panic] zpool scrub causes panic if geli vdevs d o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 357 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon May 26 18:10:01 2014 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EA70AC3 for ; Mon, 26 May 2014 18:10:01 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id BC6312CD5 for ; Mon, 26 May 2014 18:10:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s4QIA1IN026415 for ; Mon, 26 May 2014 18:10:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.8/8.14.8/Submit) id s4QIA1Gd026414; Mon, 26 May 2014 18:10:01 GMT (envelope-from gnats) Date: Mon, 26 May 2014 18:10:01 GMT Message-Id: <201405261810.s4QIA1Gd026414@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Martin Birgmeier Subject: Re: kern/184771: [nfs] [panic] panic on nfs mount Reply-To: Martin Birgmeier X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 26 May 2014 18:10:02 -0000 The following reply was made to PR kern/184771; it has been noted by GNATS. From: Martin Birgmeier To: bug-followup@FreeBSD.org, barber@mail.ru Cc: Subject: Re: kern/184771: [nfs] [panic] panic on nfs mount Date: Mon, 26 May 2014 20:09:12 +0200 I had the same problem netbooting releng/7 via NFS served by releng/9.2. I applied r261061 to releng/9.2. Now the panic does not happen any more. 
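(For anyone wanting to repeat Martin's fix: merging a single revision from head into a releng/9.2 source tree would look roughly like the sketch below. The checkout path and kernel config name are assumptions for illustration, not details from his report.)

    # Merge the single fix revision from head into the 9.2 tree,
    # then rebuild and install the kernel.
    cd /usr/src
    svn merge -c 261061 https://svn.freebsd.org/base/head .
    make buildkernel installkernel KERNCONF=GENERIC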
From owner-freebsd-fs@FreeBSD.ORG Thu May 29 08:14:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 71E9D78F for ; Thu, 29 May 2014 08:14:53 +0000 (UTC) Received: from mail.helenius.fi (r091.secroom.net [193.19.137.91]) by mx1.freebsd.org (Postfix) with ESMTP id 2EC2D2BBB for ; Thu, 29 May 2014 08:14:51 +0000 (UTC) Received: from mail.helenius.fi (localhost [127.0.0.1]) by mail.helenius.fi (Postfix) with ESMTP id 6DA987CD7 for ; Thu, 29 May 2014 08:14:44 +0000 (UTC) X-Virus-Scanned: amavisd-new at helenius.fi Received: from mail.helenius.fi ([127.0.0.1]) by mail.helenius.fi (mail.helenius.fi [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id jUYcVSm226ih for ; Thu, 29 May 2014 08:14:37 +0000 (UTC) Received: from [192.168.5.129] (a91-156-75-2.elisa-laajakaista.fi [91.156.75.2]) (Authenticated sender: pete) by mail.helenius.fi (Postfix) with ESMTPA id E3D367CCD for ; Thu, 29 May 2014 08:14:36 +0000 (UTC) From: Petri Helenius Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Subject: ZFS auto online Message-Id: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> Date: Thu, 29 May 2014 11:13:34 +0300 To: "" Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) X-Mailer: Apple Mail (2.1878.2) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 29 May 2014 08:14:53 -0000 Hi, How do I get ZFS to automatically online reattached device? Pete From owner-freebsd-fs@FreeBSD.ORG Thu May 29 09:06:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 375EF8B7 for ; Thu, 29 May 2014 09:06:48 +0000 (UTC) Received: from squishy.elizium.za.net (squishy.elizium.za.net [80.68.90.178]) by mx1.freebsd.org (Postfix) with ESMTP id 0198720B5 for ; Thu, 29 May 2014 09:06:47 +0000 (UTC) Received: from sludge.elizium.za.net (sludge.elizium.za.net [196.41.137.247]) by squishy.elizium.za.net (Postfix) with ESMTPSA id E56C84812E; Thu, 29 May 2014 10:58:59 +0200 (SAST) Date: Thu, 29 May 2014 11:03:23 +0200 From: Hugo Lombard To: Petri Helenius Subject: Re: ZFS auto online Message-ID: <20140529090323.GC12020@sludge.elizium.za.net> References: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 29 May 2014 09:06:48 -0000 On Thu, May 29, 2014 at 11:13:34AM +0300, Petri Helenius wrote: > > How do I get ZFS to automatically online reattached device? > I suspect you'll have to wait for zfsd: * The ZFS daemon consumes kernel devctl(4) event data via devd(8)'s * unix domain socket in order to react to system changes that impact * the function of ZFS storage pools. 
The goal of this daemon is to * provide similar functionality to the Solaris ZFS Diagnostic Engine * (zfs-diagnosis), the Solaris ZFS fault handler (zfs-retire), and * the Solaris ZFS vdev insertion agent (zfs-mod sysevent handler). [ http://svnweb.freebsd.org/base/projects/zfsd/head/cddl/sbin/zfsd/zfsd.cc?revision=266602&view=markup ] -- Hugo Lombard .___. (o,o) /) ) ---"-"--- From owner-freebsd-fs@FreeBSD.ORG Thu May 29 14:04:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 594F36A0; Thu, 29 May 2014 14:04:41 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mail.feld.me", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E97802D71; Thu, 29 May 2014 14:04:40 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id 546ddbb3; Thu, 29 May 2014 09:04:36 -0500 (CDT) Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id ca4a419f; Thu, 29 May 2014 09:04:36 -0500 (CDT) Received: from feld@feld.me by mail.feld.me (Archiveopteryx 3.2.0) with esmtpa id 1401372275-322-320/5/8; Thu, 29 May 2014 14:04:35 +0000 Mime-Version: 1.0 Content-Type: text/plain; format=flowed Date: Thu, 29 May 2014 09:04:35 -0500 From: Mark Felder To: Petri Helenius Subject: Re: ZFS auto online In-Reply-To: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> References: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> Message-Id: <308a6f01436240e060878bb0620d0946@mail.feld.me> X-Sender: feld@FreeBSD.org User-Agent: Roundcube Webmail/1.0.1 Sender: feld@feld.me Cc: freebsd-fs@freebsd.org, owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 29 May 2014 14:04:41 -0000 On 2014-05-29 03:13, Petri Helenius wrote: > Hi, > > How do I get ZFS to automatically online reattached device? > Besides waiting for zfsd, you could write a devd script that recognizes the device by one of several identifiers and then automatically runs the zfs commands you desire.
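(A minimal sketch of that devd idea, assuming a pool named "tank" and a disk that reappears as a da(4) device; the file path, device pattern, and pool name are all illustrative, not a tested recipe:)

    # /usr/local/etc/devd/zfs-online.conf (hypothetical file)
    notify 100 {
        match "system"    "DEVFS";
        match "subsystem" "CDEV";
        match "type"      "CREATE";
        match "cdev"      "da[0-9]+";
        # Try to online the reattached disk in pool "tank"; zpool simply
        # exits non-zero if the new device is not part of that pool.
        action "/sbin/zpool online tank /dev/$cdev";
    };

devd(8) only picks the file up after a restart (service devd restart).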
From owner-freebsd-fs@FreeBSD.ORG Fri May 30 19:36:22 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EE6E8F4D for ; Fri, 30 May 2014 19:36:22 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id CF4CB28A7 for ; Fri, 30 May 2014 19:36:22 +0000 (UTC) Received: from [192.168.251.238] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id 34A2123B6E7 for ; Fri, 30 May 2014 12:04:46 -0700 (PDT) Message-ID: <5388D64D.4030400@bayphoto.com> Date: Fri, 30 May 2014 12:04:45 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org Subject: ZFS Kernel Panic on 10.0-RELEASE Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070200060301070101080704" X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 30 May 2014 19:36:23 -0000 Hey FS@ Over the weekend, we had upgraded one of our servers from 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from 28 to 5000). Tuesday afternoon, the server suddenly rebooted (kernel panic), and as soon as it tried to remount all of its ZFS volumes, it panic'd again. We have a ZFS-on-root install; the total raidz is around 9TB. The volume it is panicking on is zroot/data/working, which has (or had at this point) 4TB of data. I can boot off of the 10.0-RELEASE USB image and mount zroot, zroot/usr, zroot/var, and zroot/data, just not the large volume that had 4TB of data. I've set the volume to readonly, and that still causes a panic upon mount. I was able to snapshot the problematic volume, and even do a send | receive (sketched below), but that panics when the transfer is nearly complete (4.64TB out of 4.639TB). Now, that data is not super critical; it's basically scratch storage where archives are extracted and shuffled around, and then moved off to another location. We just like to keep about 60 days' worth in case we need to re-process something. The more important issue is: why did this happen, and what can we recover from situations like this? It looks pretty bad when any data is lost.
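(The snapshot and send | receive attempt described above, expressed as commands: the snapshot name matches the pool history below, but the destination dataset is an assumed name, not something taken from this report.)

    # Freeze the suspect dataset, snapshot it, and try to copy the data out.
    zfs set readonly=on zroot/data/working
    zfs snapshot zroot/data/working@1
    # "zroot/data/recovered" is a hypothetical destination dataset.
    zfs send zroot/data/working@1 | zfs receive zroot/data/recovered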
See URLs for the kernel panic message (used my phone to take a picture):

https://drive.google.com/file/d/0B0i2JyKe_ya2RnRYT3A1Qk5ldkk/edit?usp=sharing
https://drive.google.com/file/d/0B0i2JyKe_ya2YWNlbVl3MVFlTGc/edit?usp=sharing

Here is the zpool history for that storage:

History for 'zroot':
2013-11-19.11:31:37 zpool create -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache -f zroot raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9 /dev/gpt/disk11 spare /dev/gpt/disk12
2013-11-19.11:31:37 zpool export zroot
2013-11-19.11:31:38 zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot
2013-11-19.11:31:38 zpool set bootfs=zroot zroot
2013-11-19.11:31:43 zfs set checksum=fletcher4 zroot
2013-11-19.11:34:11 zfs create zroot/usr
2013-11-19.11:34:11 zfs create zroot/home
2013-11-19.11:34:11 zfs create zroot/var
2013-11-19.11:34:11 zfs create zroot/data
2013-11-19.11:34:11 zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp
2013-11-19.11:34:11 zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
2013-11-19.11:34:11 zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
2013-11-19.11:34:11 zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
2013-11-19.11:34:11 zfs create -o exec=off -o setuid=off zroot/var/db
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
2013-11-19.11:34:11 zfs create -o exec=off -o setuid=off zroot/var/empty
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
2013-11-19.11:34:11 zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
2013-11-19.11:34:11 zfs create -o exec=off -o setuid=off zroot/var/run
2013-11-19.11:34:11 zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
2013-11-19.11:34:11 zfs create -V 4G zroot/swap
2013-11-19.11:34:11 zfs set org.freebsd:swap=on zroot/swap
2013-11-19.11:34:11 zfs set checksum=off zroot/swap
2013-11-19.11:43:24 zfs set readonly=on zroot/var/empty
2013-11-19.11:43:40 zfs set mountpoint=legacy zroot
2013-11-19.11:43:50 zfs set mountpoint=/tmp zroot/tmp
2013-11-19.11:43:58 zfs set mountpoint=/usr zroot/usr
2013-11-19.11:44:05 zfs set mountpoint=/var zroot/var
2013-11-19.11:44:12 zfs set mountpoint=/home zroot/home
2013-11-19.11:44:18 zfs set mountpoint=/data zroot/data
2013-11-19.20:11:53 zfs create zroot/data/working
2013-11-19.20:17:59 zpool scrub zroot
2013-11-19.21:21:23 zfs set aclmode=passthrough zroot/data
2013-11-19.21:21:33 zfs set aclinherit=passthrough zroot/data
2013-11-21.00:58:57 zfs set compression=lzjb zroot/data/working
2014-05-24.14:24:40 zfs set readonly=off zroot/var/empty
2014-05-24.15:37:15 zpool upgrade zroot
2014-05-27.15:32:41 zfs set mountpoint=/mnt zroot
2014-05-27.15:33:55 zfs set mountpoint=/mnt/tmp zroot/tmp
2014-05-27.15:34:03 zfs set mountpoint=/mnt/var zroot/var
2014-05-27.15:34:13 zfs set mountpoint=/mnt/crash zroot/var/crash
2014-05-27.15:34:22 zfs set mountpoint=/mnt/db zroot/var/db
2014-05-27.15:34:35 zfs set mountpoint=/mnt/db/pkg zroot/var/db/pkg
2014-05-27.15:34:47 zfs set mountpoint=/mnt/db/empty zroot/var/empty
2014-05-27.15:35:22 zfs set mountpoint=/mnt/var/db zroot/var/db
2014-05-27.15:35:29 zfs set mountpoint=/mnt/var/db/pkg zroot/var/db/pkg
2014-05-27.15:35:38 zfs set mountpoint=/mnt/var/empty zroot/var/empty
2014-05-27.15:35:45 zfs set mountpoint=/mnt/var/log zroot/var/log
2014-05-27.15:35:54 zfs set mountpoint=/mnt/var/mail zroot/var/mail
2014-05-27.15:36:02 zfs set mountpoint=/mnt/var/run zroot/var/run
2014-05-27.15:36:09 zfs set mountpoint=/mnt/var/tmp zroot/var/tmp
2014-05-27.15:36:34 zfs set mountpoint=/mnt/usr zroot/usr
2014-05-27.15:36:40 zfs set mountpoint=/mnt/usr/ports zroot/usr/ports
2014-05-27.15:36:54 zfs set mountpoint=/mnt/usr/distfiles zroot/usr/ports/distfiles
2014-05-27.15:37:12 zfs set mountpoint=/mnt/usr/ports/packages zroot/usr/ports/packages
2014-05-27.15:37:20 zfs set mountpoint=/mnt/usr/ports/distfiles zroot/usr/ports/distfiles
2014-05-27.15:37:35 zfs set mountpoint=/mnt/usr/src zroot/usr/src
2014-05-27.15:37:53 zfs set mountpoint=/mnt/home zroot/home
2014-05-27.15:38:39 zfs set mountpoint=/mnt/data zroot/data
2014-05-27.15:38:47 zfs set mountpoint=/mnt/data/working zroot/data/working
2014-05-27.15:57:53 zpool scrub zroot
2014-05-28.09:34:16 zfs snapshot zroot/data/working@1
2014-05-28.18:55:12 zfs set readonly=on zroot/data/working

The full zfs attributes for that particular volume:

NAME                PROPERTY              VALUE                  SOURCE
zroot/data/working  type                  filesystem             -
zroot/data/working  creation              Tue Nov 19 20:11 2013  -
zroot/data/working  used                  4.64T                  -
zroot/data/working  available             2.49T                  -
zroot/data/working  referenced            4.64T                  -
zroot/data/working  compressratio         1.00x                  -
zroot/data/working  mounted               no                     -
zroot/data/working  quota                 none                   default
zroot/data/working  reservation           none                   default
zroot/data/working  recordsize            128K                   default
zroot/data/working  mountpoint            /mnt/data/working      local
zroot/data/working  sharenfs              off                    default
zroot/data/working  checksum              fletcher4              inherited from zroot
zroot/data/working  compression           lzjb                   local
zroot/data/working  atime                 on                     default
zroot/data/working  devices               on                     default
zroot/data/working  exec                  on                     default
zroot/data/working  setuid                on                     default
zroot/data/working  readonly              on                     local
zroot/data/working  jailed                off                    default
zroot/data/working  snapdir               hidden                 default
zroot/data/working  aclmode               passthrough            inherited from zroot/data
zroot/data/working  aclinherit            passthrough            inherited from zroot/data
zroot/data/working  canmount              on                     default
zroot/data/working  xattr                 on                     default
zroot/data/working  copies                1                      default
zroot/data/working  version               5                      -
zroot/data/working  utf8only              off                    -
zroot/data/working  normalization         none                   -
zroot/data/working  casesensitivity       sensitive              -
zroot/data/working  vscan                 off                    default
zroot/data/working  nbmand                off                    default
zroot/data/working  sharesmb              off                    default
zroot/data/working  refquota              none                   default
zroot/data/working  refreservation        none                   default
zroot/data/working  primarycache          all                    default
zroot/data/working  secondarycache        all                    default
zroot/data/working  usedbysnapshots       0                      -
zroot/data/working  usedbydataset         4.64T                  -
zroot/data/working  usedbychildren        0                      -
zroot/data/working  usedbyrefreservation  0                      -
zroot/data/working  logbias               latency                default
zroot/data/working  dedup                 off                    default
zroot/data/working  mlslabel                                     -
zroot/data/working  sync                  standard               default
zroot/data/working  refcompressratio      1.00x                  -
zroot/data/working  written               0                      -
zroot/data/working  logicalused           4.65T                  -
zroot/data/working  logicalreferenced     4.65T                  -

Was this a zfs 10.0-RELEASE issue?
Or, did our Dell PERC H710 controller just happen to become an issue and the timing is coincidental? Any pointers on either restoring the data or preventing this in the future would be great. Mike C From owner-freebsd-fs@FreeBSD.ORG Fri May 30 19:48:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 40F634B5 for ; Fri, 30 May 2014 19:48:14 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 229EE2992 for ; Fri, 30 May 2014 19:48:14 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id B7C0676900; Fri, 30 May 2014 12:48:13 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 82911-02; Fri, 30 May 2014 12:48:13 -0700 (PDT) Received: from [10.8.0.10] (unknown [10.8.0.10]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id 969CA768FD; Fri, 30 May 2014 12:48:12 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: ZFS Kernel Panic on 10.0-RELEASE From: Jordan Hubbard In-Reply-To: <5388D64D.4030400@bayphoto.com> Date: Fri, 30 May 2014 12:48:11 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: References: <5388D64D.4030400@bayphoto.com> To: mike@bayphoto.com X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 30 May 2014 19:48:14 -0000 On May 30, 2014, at
12:04 PM, Mike Carlson wrote: > Over the weekend, we had upgraded one of our servers from 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from 28 to 5000) > > Tuesday afternoon, the server suddenly rebooted (kernel panic), and as soon as it tried to remount all of its ZFS volumes, it panic'd again. What's the panic text? That's pretty crucial in figuring out whether this is recoverable (e.g. if it's spacemap corruption related, probably not). - Jordan From owner-freebsd-fs@FreeBSD.ORG Fri May 30 20:10:30 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C8FC8C2C for ; Fri, 30 May 2014 20:10:30 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id AA31A2B88 for ; Fri, 30 May 2014 20:10:30 +0000 (UTC) Received: from [192.168.251.238] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id 17BF423B394; Fri, 30 May 2014 13:10:29 -0700 (PDT) Message-ID: <5388E5B4.3030002@bayphoto.com> Date: Fri, 30 May 2014 13:10:28 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Jordan Hubbard Subject: Re: ZFS Kernel Panic on 10.0-RELEASE References: <5388D64D.4030400@bayphoto.com> In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080807090304060202080806" X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 30 May 2014 20:10:31 -0000 On 5/30/2014 12:48 PM, Jordan Hubbard wrote: > On May 30, 2014, at 12:04 PM, Mike Carlson wrote: > >> Over the weekend, we had upgraded one of our servers from 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from 28 to 5000) >> >> Tuesday afternoon, the server suddenly rebooted (kernel panic), and as soon as it tried to remount all of its ZFS volumes, it panic'd again. > What's the panic text? That's pretty crucial in figuring out whether this is recoverable (e.g. if it's spacemap corruption related, probably not).
> - Jordan
>
>
I had linked the pictures I took of the console, but here is my manual reproduction:

Fatal trap 12: page fault while in kernel mode
cpuid = 7; apic id = 07
fault virtual address   = 0x4a0
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff81a7f39f
stack pointer           = 0x28:0xfffffe1834789570
frame pointer           = 0x28:0xfffffe18347895b0
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 1849 (txg_thread_enter)
trap number             = 12
panic: page fault
cpuid = 7
KDB: stack backtrace:
#0 0xffffffff808e7dd0 at kdb_backtrace+0x60
#1 0xffffffff808af8b5 at panic+0x155
#2 0xffffffff80c8e629 at trap_fatal+0x3a2
#3 0xffffffff80c8e969 at trap_pfault+0x2c9
#4 0xffffffff80c8e0f6 at trap+0x5e6
#5 0xffffffff80c75392 at calltrap+0x8
#6 0xffffffff81a53b5a at dsl_dataset_block_kill+0x3a
#7 0xffffffff81a50967 at dnode_sync+0x237
#8 0xffffffff81a48fcb at dmu_objset_sync_dnodes+0x2b
#9 0xffffffff81a48e4d at dmu_objset_sync+0x1ed
#10 0xffffffff81a5d29a at dsl_pool_sync+0xca
#11 0xffffffff81a78a4e at spa_sync+0x52e
#12 0xffffffff81a81925 at txg_sync_thread+0x375
#13 0xffffffff8088198a at fork_exit+0x9a
#14 0xffffffff80c758ce at fork_trampoline+0xe
uptime: 46s
Automatic reboot in 15 seconds - press a key on the console to abort
vUgPhObHQsD1Ui+NKsIGYBFKBhNmVXqMQSn4JzC9x0oDEmv37UGrcut2cCP3ZS17p137VaUM lQ0RWomju+sPCPFgyCPa/TLPoMZ2334uIxkRbDefvOoXIosORMQ9Jh50XqktUesbhuBfH9Q5 8h8bTWm1Cn/LxXW9qdSSbnta0OAH1G4hwVUlcusSM0o7Ude8tszw6kRpmEDDE8BQjE5nXSY7 wOf1eXfQfDkVQiouTD3l5ElqB98tnCnL/y9dphWoBLmiJwgb/4yWZ/Zewc1V65UFr7LmvcQM MIIGIzCCBAugAwIBAgIIYpSXgZOT7j0wDQYJKoZIhvcNAQELBQAwWDEcMBoGA1UEAwwTQmF5 IFBob3RvIFBlb3BsZSBDQTEWMBQGA1UECgwNQmF5IFBob3RvIExhYjETMBEGA1UECAwKQ2Fs aWZvcm5pYTELMAkGA1UEBhMCVVMwHhcNMTIxMDIzMTgwMDAzWhcNMTQxMDIzMTgwMDAzWjBg MRUwEwYKCZImiZPyLGQBAQwFMTMwNjkxFTATBgNVBAMMDE1pa2UgQ2FybHNvbjELMAkGA1UE CwwCSVQxFjAUBgNVBAoMDUJheSBQaG90byBMYWIxCzAJBgNVBAYTAlVTMIICIjANBgkqhkiG 9w0BAQEFAAOCAg8AMIICCgKCAgEAoTzIvF66A3wYPWXQFzmjBUuIrHSQGhOcE7rb0KQtar+H rkmHX56bCAklW5P/pd+yJ08lMwb3CxbTOz0q47JuBv2kKJO1xCgCua26Uvz3VAmfirmWwpXq zZBDqy/bEIt/XFfiVUC4jriGSEPrtx9q9nJJsb2JVRgtsbcHaaJFu8u8s8p8cLbcYdKobS00 g6+7it2IpIJhxc5tEMa1Yku3kCQiHVVFa9b4H5pFDHpkCrKZ43cuCneiR5kgr47z/3U66kLt J7Q5IT/i7nThjGQMa/f1JSWet8yeTomKvqkuEAA4o/IWQzEbtxzeps6vWxaCDULjEq69s//S 6PtqiQSmG9ZGFoPYD8/GGd4CMBqgjKopintD5sGTlJ851yZwl9VY/hRuxInp8gTjWrt1gQIB zlgSgSKnKTN6f+e85XMPU1y/wVz8RJWl8Tr11kzo6vrM14+ruNUxo1Ea3PJ9MUcWenoRKGSU I/IP94kZVjPkZlJv6tTF0Yi2Gclet/ZDu8vgvkxmUZYdQMGGlgZTCAsvHr37/ov6g51Tf+im 7410EsdYCmSINRGzWQAzlH9NscsW1TAd1Znog1H6NRDExY3ksjvFcKYOjUmkyWT8Vl2oJmT2 IzI23/C3esGL9OZzZ6K84MRNrH1y/yNp75vQnP3JfDMpbb5kkDp95Bu365qBluECAwEAAaOB 6DCB5TBSBggrBgEFBQcBAQRGMEQwQgYIKwYBBQUHMAGGNmh0dHA6Ly9iYXljYS5iYXlob3Rv LmxvY2FsL2VqYmNhL3B1YmxpY3dlYi9zdGF0dXMvb2NzcDAdBgNVHQ4EFgQUzeso+31hmtp3 soKHShXXtAEo+iMwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBQt+MJyvWZi5ge71ediHix2 mqSd9zAOBgNVHQ8BAf8EBAMCBDAwEwYDVR0lBAwwCgYIKwYBBQUHAwQwHAYDVR0RBBUwE4ER bWlrZUBiYXlwaG90by5jb20wDQYJKoZIhvcNAQELBQADggIBADnQfCasVgMsKsxIZAOZCbsU xo9BfsbpoM02p2aP+vPNDLXYRmcnH6ReeeUKSfIn0HmS9XkeHizMEXaC5sV9g4dasdQJQOGU mDcBnlxGn5fzNVFBM7/RHL83waYq4MCeyP9M7lSiNFZTrnSLVL9lIO0FLrBE06c9bn09kExc zkXI6Qm+e/MNrnoC3vw3GbH3a7tZCPsQcyNSok99jgPTRb1g9uVPg25M+ScYMU0wv2BE24u1 Dfzwcq52h64TllbzdVg/qOQH1HM96wmU+CtPuzA6eYnWRao/80LfQcyhNZ/jfMB/9xwFwsam o3Bw7SrSPEatw/tMyEEVMzas5/wZm2uMtab7642d5mr5OWLVPYgmKUscSlNt87vKkFhvn0Cz Z7O8O79WNMJA0sx1aomn1/ZrWDkd8X/ACUC2Fa3cV4AAzmjytiNu7r2z+GwdXPmvWSlBDXKX wLSoRkdq5hmYAP3GwXF0dsZo63WJLuCU1bPyERNLKdZM//eX832WgomPs4FA4xg0MUH0S7vJ eo7K1cTutZEmyLT623p0GcOINs2ir/ZqPTDLKszI7ytAltYaATt4kYUXbmMGGYItDf1X/caj DoLv2hjBTM5HORZYABC/Kfo9iL4KeYDqAvblJc7qyw+QXdHOUbwc9gQXQJvlQlfjDYvJLKme zoZ1sMzRBOl0MIIG1zCCBL+gAwIBAgIIGE38aUOyx8EwDQYJKoZIhvcNAQELBQAwWDEcMBoG A1UEAwwTQmF5IFBob3RvIFBlb3BsZSBDQTEWMBQGA1UECgwNQmF5IFBob3RvIExhYjETMBEG A1UECAwKQ2FsaWZvcm5pYTELMAkGA1UEBhMCVVMwHhcNMTIxMDIzMTc1NzQ1WhcNMTQxMDIz MTc1NzQ1WjBgMRUwEwYKCZImiZPyLGQBAQwFMTMwNjkxFTATBgNVBAMMDE1pa2UgQ2FybHNv bjELMAkGA1UECwwCSVQxFjAUBgNVBAoMDUJheSBQaG90byBMYWIxCzAJBgNVBAYTAlVTMIIC IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAuECpC3YUm7GV0xz/DMmMQZ3EvMfvVhFB 77TcVxY1OoBlp2jk0ST0Hel/vp9uPhhi/eAlH89rC9fhNwORUHfeofWhoT3ZXrnjisNDQnb5 MgBV6wVM58PrikwU13FuNrIrPuUeuUE659BQhfgj2j2Pv9GVgBib6NBbHJAIOFo8H3wmd4b9 Yv6RKM145qSJCrJp96jSkoQSwni+jraHGMs55kgFP/3f0X6RpY7GhvItFI95Xauf7R5qTuW0 oamhvjFnlWVBoMuHd5yqzUgL7gQ0wfB31wfBP2ghFPAv715Qv9DYleFmgWE1LRkrt8clCzzk huj00CrEL+rSK9bDYNpa5AdCQ8aA/bG2x24ApOBvfDYnem+Ytu9lIY7qKZiP+9pASOiXsSSt g8CH9HoG+9GpK/rDyQ2nuNxTWEQEetwofkLdrGU/VgpcwIZqETsugu+l/FCu3Ogslezpiolb SqjCF5CU3aZravNZo4+HDriFKm7jyKEb+zUcXcwNH2iwbGt6uTMILslXbWXuhi8Lu3SKyacj 1gn9OMPQTCUKxaiTJXqIcJP0CV9wKcgTn7vanChDgz2fTVm6HOzb2RZlyhYzU4ofuMk+VGbg tBSDns+B9D1ACZMd58D2XdqRvmGPlnbJEZPkAZyDiR4mp/Aw09uXzOCIHzvqLhiysGon7lkV 
From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 14:21:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 718D3CBF for ; Sun, 1 Jun 2014 14:21:59 +0000 (UTC) Received: from mail-qg0-x231.google.com (mail-qg0-x231.google.com [IPv6:2607:f8b0:400d:c04::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3278320D7 for ; Sun, 1 Jun 2014 14:21:58 +0000 (UTC) Received: by
mail-qg0-f49.google.com with SMTP id a108so9045548qge.36 for ; Sun, 01 Jun 2014 07:21:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtual.org.ua; s=google; h=mime-version:from:date:message-id:subject:to:content-type; bh=tzudZ1NbBCIy1dfOIbhONBk4j6JVlJgRoHMsFdLIo4s=; b=jH4VqSJZBycQtcl+PecKDSAgOqltrCGxXTma2jfF2Vj9r/85Yux/0/vyy+gH5Qes8g ku6VKKWbamNkzontklbkg7hGtqLpfLNau5ZJTkVw3vKsX3L82NNrf448GmCVk8XGUh2f AWHSzhXlhzasttJK2uPf6BNKIzgHIkWsFfFWQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:from:date:message-id:subject:to :content-type; bh=tzudZ1NbBCIy1dfOIbhONBk4j6JVlJgRoHMsFdLIo4s=; b=ec24P1GGLBd64mLyuJf9SzheOQgN80pJ2XPhnN2cRxCau9WzBf4xiqupgVjBgbEJeu LNKrjx3u6ycMhPMhH4u48/sJNwHZK9mcnZu9K9CHxdK9k0zUSOViD3AbG8bl5UGAkYF9 V1gwFTTDmRzU1G9+tmKGAT5ZT8QXLEv0lbBGaqqxMOBe05mI0sPBFMIMsnVc8hURuEei HjGcuENn8ObYpE0rfdBH1YmLWENZHMTVkJaJ/btkxyYbFj2VN2IppCeOjojZVZLwiHNJ RnSx6MfIk92u088I30gHXCiDjC2OGER8Z07ojrIHLZ0hQFAH9Vu+Pc/K4xtF5gJo4bhV dHXQ== X-Gm-Message-State: ALoCoQmdRTlp1UrcQ48wLi26X1y+sEOcNrVrll3DZYcxC6+k7u+mfiilOiOTKycP7HpSwFMQZTbV X-Received: by 10.224.166.9 with SMTP id k9mr40404130qay.25.1401632517641; Sun, 01 Jun 2014 07:21:57 -0700 (PDT) MIME-Version: 1.0 Received: by 10.140.92.110 with HTTP; Sun, 1 Jun 2014 07:21:37 -0700 (PDT) From: Pavlo Greenberg Date: Sun, 1 Jun 2014 17:21:37 +0300 Message-ID: Subject: Recover ZFS pool after re-initialization To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 14:21:59 -0000 Hello. Is it possible to recover the data from a pool that was accidentally destroyed? I ran the wrong command from my bash history: instead of "zpool import" I did "zpool create". I didn't write anything to the pool after that. The pool's history is empty now and I can't roll back what I did. Is there any way to bring the previous pool back, or even somehow restore the data it contained? Peculiar situation, I know, but sometimes the most foolish errors are the most fatal.
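(An aside, not from the thread: the usual first steps in this situation, sketched with a placeholder pool name "tank" and member disk "da0". After a fresh "zpool create" over the same disks the old labels have most likely been overwritten, so expect these to find nothing, but they are cheap and non-destructive to try:

  zdb -l /dev/da0                          # inspect what labels actually remain on a member disk
  zpool import -D                          # list pools whose labels are still present but marked destroyed
  zpool import -D -f -o readonly=on tank   # if the old pool still appears, import it read-only
  zpool import -F -f -o readonly=on tank   # last resort: rewind the import to an earlier transaction group

Anything that does come back should be copied off with zfs send before the disks are touched again.)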
From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 15:44:26 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 372627E1 for ; Sun, 1 Jun 2014 15:44:26 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id EFD5E26C7 for ; Sun, 1 Jun 2014 15:44:25 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 3B88020E7088B; Sun, 1 Jun 2014 15:36:36 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id EC4A420E70886; Sun, 1 Jun 2014 15:36:31 +0000 (UTC) Message-ID: <4F8352CFDC8643D0AB6F999E4A87847D@multiplay.co.uk> From: "Steven Hartland" To: "Pavlo Greenberg" , References: Subject: Re: Recover ZFS pool after re-initialization Date: Sun, 1 Jun 2014 16:36:36 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 15:44:26 -0000 ----- Original Message ----- From: "Pavlo Greenberg" To: Sent: Sunday, June 01, 2014 3:21 PM Subject: Recover ZFS pool after re-initialization > Hello. > Is it possible to recover the data from a pool, that was accidentally > destroyed? I erroneously ran the wrong command from my bash history > and instead of "zpool import" did "zpool create". I didn't write > anything on this pool after that. The history of the pool is empty now > and I can't roll-back what I did. > Is there any way to bring the previous pool back or even somehow > restore the data it contained? Peculiar situation, I know, but > sometimes the most foolish errors are the most fatal. I would guess not if you did a full create, but I'm very surprised that worked: without specifying the additional device parameters the create should fail, so it's rather hard to confuse the two. In addition, I would also expect create to check for an already existing pool on the devices before allowing you to proceed. If that's not the case, it would be a worthwhile check to add, IMO.
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 15:55:34 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BA0BFBB3; Sun, 1 Jun 2014 15:55:34 +0000 (UTC) Received: from i3mail.icecube.wisc.edu (i3mail.icecube.wisc.edu [128.104.255.23]) by mx1.freebsd.org (Postfix) with ESMTP id 8D3F6278B; Sun, 1 Jun 2014 15:55:34 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by i3mail.icecube.wisc.edu (Postfix) with ESMTP id 579B63806E; Sun, 1 Jun 2014 10:55:28 -0500 (CDT) X-Virus-Scanned: amavisd-new at icecube.wisc.edu Received: from i3mail.icecube.wisc.edu ([127.0.0.1]) by localhost (i3mail.icecube.wisc.edu [127.0.0.1]) (amavisd-new, port 10030) with ESMTP id oHvKQqhQ-e4S; Sun, 1 Jun 2014 10:55:28 -0500 (CDT) Received: from comporellon.tachypleus.net (polaris.tachypleus.net [75.101.50.44]) by i3mail.icecube.wisc.edu (Postfix) with ESMTPSA id EDF833802B; Sun, 1 Jun 2014 10:55:27 -0500 (CDT) Message-ID: <538B4CEF.2030801@freebsd.org> Date: Sun, 01 Jun 2014 08:55:27 -0700 From: Nathan Whitehorn User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-hackers@freebsd.org, freebsd-fs@freebsd.org Subject: Re: fdisk(8) vs gpart(8), and gnop References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> In-Reply-To: <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 15:55:34 -0000 On 06/01/14 08:52, Steven Hartland wrote: > ----- Original Message ----- From: "Mark Felder" > >> On May 31, 2014, at 20:57, Freddie Cash wrote: >> >>> There's a sysctl where you can set the minimum ashift for zfs. Then you >>> never need to use gnop. >>> >>> I believe it's part of 10.0? >> >> I've not seen this yet. What we need is to port the ability to set >> ashift at pool creation time: >> >> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >> >> I believe the Linux zfs port has this functionality now, but we still >> do not. > > We don't have that direct option yet but you can achieve the > same thing by setting: vfs.zfs.min_auto_ashift=12 > Does anyone have any objections to me changing this default, right now, today? 
-Nathan From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 16:00:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 51FA1E9B; Sun, 1 Jun 2014 16:00:23 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 13C2627C0; Sun, 1 Jun 2014 16:00:22 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 4E46120E7088C; Sun, 1 Jun 2014 16:00:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 1BDE020E70886; Sun, 1 Jun 2014 16:00:17 +0000 (UTC) Message-ID: <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> From: "Steven Hartland" To: "Nathan Whitehorn" , , References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> Subject: Re: fdisk(8) vs gpart(8), and gnop Date: Sun, 1 Jun 2014 17:00:21 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 16:00:23 -0000 ----- Original Message ----- From: "Nathan Whitehorn" To: ; Sent: Sunday, June 01, 2014 4:55 PM Subject: Re: fdisk(8) vs gpart(8), and gnop > On 06/01/14 08:52, Steven Hartland wrote: >> ----- Original Message ----- From: "Mark Felder" >> >>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>> >>>> There's a sysctl where you can set the minimum ashift for zfs. Then you >>>> never need to use gnop. >>>> >>>> I believe it's part of 10.0? >>> >>> I've not seen this yet. What we need is to port the ability to set >>> ashift at pool creation time: >>> >>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>> >>> I believe the Linux zfs port has this functionality now, but we still >>> do not. >> >> We don't have that direct option yet but you can achieve the >> same thing by setting: vfs.zfs.min_auto_ashift=12 >> > Does anyone have any objections to me changing this default, right now, > today? > -Nathan I think you will get some objections to that, as it can have quite an impact on performance for disks which really are 512-byte, due to the increased overhead of transferring 4k when only 512 is required. This has an even more dramatic impact on RAIDZx.
Personally we run a custom kernel on our machines which has just this change in it, to ensure compatibility with future disks, so I can confirm it does indeed have the desired effect :) Regards Steve From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 16:07:54 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 12AC6413; Sun, 1 Jun 2014 16:07:54 +0000 (UTC) Received: from i3mail.icecube.wisc.edu (i3mail.icecube.wisc.edu [128.104.255.23]) by mx1.freebsd.org (Postfix) with ESMTP id D801B2887; Sun, 1 Jun 2014 16:07:53 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by i3mail.icecube.wisc.edu (Postfix) with ESMTP id 52A6F38070; Sun, 1 Jun 2014 11:07:53 -0500 (CDT) X-Virus-Scanned: amavisd-new at icecube.wisc.edu Received: from i3mail.icecube.wisc.edu ([127.0.0.1]) by localhost (i3mail.icecube.wisc.edu [127.0.0.1]) (amavisd-new, port 10030) with ESMTP id 8QBNQY1VbJVa; Sun, 1 Jun 2014 11:07:53 -0500 (CDT) Received: from comporellon.tachypleus.net (polaris.tachypleus.net [75.101.50.44]) by i3mail.icecube.wisc.edu (Postfix) with ESMTPSA id D10EE3805A; Sun, 1 Jun 2014 11:07:52 -0500 (CDT) Message-ID: <538B4FD7.4090000@freebsd.org> Date: Sun, 01 Jun 2014 09:07:51 -0700 From: Nathan Whitehorn User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , freebsd-hackers@freebsd.org, freebsd-fs@freebsd.org Subject: Re: fdisk(8) vs gpart(8), and gnop References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> In-Reply-To: <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 16:07:54 -0000 On 06/01/14 09:00, Steven Hartland wrote: > > ----- Original Message ----- From: "Nathan Whitehorn" > > To: ; > Sent: Sunday, June 01, 2014 4:55 PM > Subject: Re: fdisk(8) vs gpart(8), and gnop > > >> On 06/01/14 08:52, Steven Hartland wrote: >>> ----- Original Message ----- From: "Mark Felder" >>> >>>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>> >>>>> There's a sysctl where you can set the minimum ashift for zfs. >>>>> Then you >>>>> never need to use gnop. >>>>> >>>>> I believe it's part of 10.0? >>>> >>>> I've not seen this yet. What we need is to port the ability to set >>>> ashift at pool creation time: >>>> >>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>>> >>>> I believe the Linux zfs port has this functionality now, but we >>>> still do not. >>> >>> We don't have that direct option yet but you can achieve the >>> same thing by setting: vfs.zfs.min_auto_ashift=12 >>> >> Does anyone have any objections to me changing this default, right >> now, today? >> -Nathan > > I think you will get some objections to that, as it can have quite an > impact > on the performance for disks which are 512, due to the increased > overhead of > transfering 4k when only 512 is really required. This has a more dramatic > impact on RAIDZx due too.
> > Personally we run a custom kernel on our machines which has just this > change > in it to ensure capability with future disks, so I can confirm it does > indeed > have the desired effect :) So the discussion here is related to what to do about the installer. The current ZFS component unconditionally creates gnops all over the place to set ashift to 4k. That's across the board worse: it has exactly the performance impact of changing the default of this sysctl (whatever that is), it can't easily be overridden (which the sysctl can), and it's a horrible hack to boot. There are a few options: 1. Change the default of vfs.zfs.min_auto_ashift 2. Have the same effect but in a vastly worse way by adjusting the installer to create gnops 3. Have ZFS choose by itself and decide to do that permanently. Our ATA code is good about reporting block sizes now, so (3) isn't a big issue except for the mixed-pool case, which is a huge PITA. We need to choose one of these. I favor (1). -Nathan From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 16:14:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 25A0B776; Sun, 1 Jun 2014 16:14:56 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id B1ABF2950; Sun, 1 Jun 2014 16:14:55 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id EA61F20E7088C; Sun, 1 Jun 2014 16:14:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 9FE4920E70886; Sun, 1 Jun 2014 16:14:50 +0000 (UTC) Message-ID: <8D276E03788643A39AABD6A7127B21A0@multiplay.co.uk> From: "Steven Hartland" To: "Nathan Whitehorn" , , References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> Subject: Re: fdisk(8) vs gpart(8), and gnop Date: Sun, 1 Jun 2014 17:14:54 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 16:14:56 -0000 ----- Original Message ----- From: "Nathan Whitehorn" To: "Steven Hartland" ; ; Sent: Sunday, June 01, 2014 5:07 PM Subject: Re: fdisk(8) vs gpart(8), and gnop > On 06/01/14 09:00, Steven Hartland wrote: >> >> ----- Original Message ----- From: "Nathan Whitehorn" >> >> To: ; >> Sent: Sunday, June 01, 2014 4:55 PM >> Subject: Re: fdisk(8) vs gpart(8), and gnop >> >> >>> On 06/01/14 08:52, Steven Hartland wrote: >>>> ----- Original Message ----- From: "Mark Felder" >>>> >>>>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>>> >>>>>> There's a sysctl 
where you can set the minimum ashift for zfs. >>>>>> Then you >>>>>> never need to use gnop. >>>>>> >>>>>> I believe it's part of 10.0? >>>>> >>>>> I've not seen this yet. What we need is to port the ability to set >>>>> ashift at pool creation time: >>>>> >>>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>>>> >>>>> I believe the Linux zfs port has this functionality now, but we >>>>> still do not. >>>> >>>> We don't have that direct option yet but you can achieve the >>>> same thing by setting: vfs.zfs.min_auto_ashift=12 >>>> >>> Does anyone have any objections to me changing this default, right >>> now, today? >>> -Nathan >> >> I think you will get some objections to that, as it can have quite an >> impact >> on the performance for disks which are 512, due to the increased >> overhead of >> transfering 4k when only 512 is really required. This has a more dramatic >> impact on RAIDZx due too. >> >> Personally we run a custom kernel on our machines which has just this >> change >> in it to ensure capability with future disks, so I can confirm it does >> indeed >> have the desired effect :) > > So the discussion here is related to what to do about the installer. The > current ZFS component unconditionally creates gnops all over the place > to set ashift to 4k. That's across the board worse: it has exactly the > performance impact of changing the default of this sysctl (whatever that > is), it can't easily be overridden (which the sysctl can), and it's a > horrible hack to boot. There are a few options: > > 1. Change the default of vfs.zfs.min_auto_ashift > 2. Have the same effect but in a vastly worse way by adjusting the > installer to create gnops > 3. Have ZFS choose by itself and decide to do that permanently. > > Our ATA code is good about reporting block sizes now, so (3) isn't a big > issue except for the mixed-pool case, which is a huge PITA. > > We need to choose one of these. I favor (1). I wasn't aware of that, but it should do #3. min_auto_ashift is a bigger discussion.
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 16:32:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5FC58E9E; Sun, 1 Jun 2014 16:32:27 +0000 (UTC) Received: from i3mail.icecube.wisc.edu (i3mail.icecube.wisc.edu [128.104.255.23]) by mx1.freebsd.org (Postfix) with ESMTP id 15DF52AC6; Sun, 1 Jun 2014 16:32:26 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by i3mail.icecube.wisc.edu (Postfix) with ESMTP id AA70438067; Sun, 1 Jun 2014 11:32:25 -0500 (CDT) X-Virus-Scanned: amavisd-new at icecube.wisc.edu Received: from i3mail.icecube.wisc.edu ([127.0.0.1]) by localhost (i3mail.icecube.wisc.edu [127.0.0.1]) (amavisd-new, port 10030) with ESMTP id SmxNPqPrJeYK; Sun, 1 Jun 2014 11:32:25 -0500 (CDT) Received: from comporellon.tachypleus.net (polaris.tachypleus.net [75.101.50.44]) by i3mail.icecube.wisc.edu (Postfix) with ESMTPSA id BB5BE3805E; Sun, 1 Jun 2014 11:32:24 -0500 (CDT) Message-ID: <538B5597.3060007@freebsd.org> Date: Sun, 01 Jun 2014 09:32:23 -0700 From: Nathan Whitehorn User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , freebsd-hackers@freebsd.org, freebsd-fs@freebsd.org Subject: Re: fdisk(8) vs gpart(8), and gnop References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <8D276E03788643A39AABD6A7127B21A0@multiplay.co.uk> In-Reply-To: <8D276E03788643A39AABD6A7127B21A0@multiplay.co.uk> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 16:32:27 -0000 On 06/01/14 09:14, Steven Hartland wrote: > > ----- Original Message ----- From: "Nathan Whitehorn" > > To: "Steven Hartland" ; > ; > Sent: Sunday, June 01, 2014 5:07 PM > Subject: Re: fdisk(8) vs gpart(8), and gnop > > >> On 06/01/14 09:00, Steven Hartland wrote: >>> >>> ----- Original Message ----- From: "Nathan Whitehorn" >>> >>> To: ; >>> Sent: Sunday, June 01, 2014 4:55 PM >>> Subject: Re: fdisk(8) vs gpart(8), and gnop >>> >>> >>>> On 06/01/14 08:52, Steven Hartland wrote: >>>>> ----- Original Message ----- From: "Mark Felder" >>>>> >>>>>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>>>> >>>>>>> There's a sysctl where you can set the minimum ashift for zfs. >>>>>>> Then you >>>>>>> never need to use gnop. >>>>>>> >>>>>>> I believe it's part of 10.0? >>>>>> >>>>>> I've not seen this yet. What we need is to port the ability to >>>>>> set ashift at pool creation time: >>>>>> >>>>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 >>>>>> disk4 >>>>>> >>>>>> I believe the Linux zfs port has this functionality now, but we >>>>>> still do not. >>>>> >>>>> We don't have that direct option yet but you can achieve the >>>>> same thing by setting: vfs.zfs.min_auto_ashift=12 >>>>> >>>> Does anyone have any objections to me changing this default, right >>>> now, today? 
>>>> -Nathan >>> >>> I think you will get some objections to that, as it can have quite >>> an impact >>> on the performance for disks which are 512, due to the increased >>> overhead of >>> transfering 4k when only 512 is really required. This has a more >>> dramatic >>> impact on RAIDZx due too. >>> >>> Personally we run a custom kernel on our machines which has just >>> this change >>> in it to ensure capability with future disks, so I can confirm it >>> does indeed >>> have the desired effect :) >> >> So the discussion here is related to what to do about the installer. >> The current ZFS component unconditionally creates gnops all over the >> place to set ashift to 4k. That's across the board worse: it has >> exactly the performance impact of changing the default of this sysctl >> (whatever that is), it can't easily be overridden (which the sysctl >> can), and it's a horrible hack to boot. There are a few options: >> >> 1. Change the default of vfs.zfs.min_auto_ashift >> 2. Have the same effect but in a vastly worse way by adjusting the >> installer to create gnops >> 3. Have ZFS choose by itself and decide to do that permanently. >> >> Our ATA code is good about reporting block sizes now, so (3) isn't a >> big issue except for the mixed-pool case, which is a huge PITA. >> >> We need to choose one of these. I favor (1). > > I wasn't aware of that but it should do #3 > > min_auto_ashift is a bigger discussion. Fair enough. I'm going to decide not to worry about (2) while integrating some installer patches then. If we do either (1) or (3), I'm perfectly happy. It would be nice if that discussion happened, however, rather than dying now. -Nathan From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 19:47:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DA5D5F12 for ; Sun, 1 Jun 2014 19:47:01 +0000 (UTC) Received: from mail.egr.msu.edu (dauterive.egr.msu.edu [35.9.37.168]) by mx1.freebsd.org (Postfix) with ESMTP id 965992A7A for ; Sun, 1 Jun 2014 19:47:01 +0000 (UTC) Received: from dauterive (localhost [127.0.0.1]) by mail.egr.msu.edu (Postfix) with ESMTP id 3B3A526830 for ; Sun, 1 Jun 2014 15:40:33 -0400 (EDT) X-Virus-Scanned: amavisd-new at egr.msu.edu Received: from mail.egr.msu.edu ([127.0.0.1]) by dauterive (dauterive.egr.msu.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id maltBJQF6pm1 for ; Sun, 1 Jun 2014 15:40:33 -0400 (EDT) Received: from EGR authenticated sender Message-ID: <538B81B0.7030903@egr.msu.edu> Date: Sun, 01 Jun 2014 15:40:32 -0400 From: Adam McDougall User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: fdisk(8) vs gpart(8), and gnop References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> In-Reply-To: <538B4FD7.4090000@freebsd.org> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 19:47:02 -0000 On 06/01/2014 12:07, Nathan Whitehorn wrote: > On 
06/01/14 09:00, Steven Hartland wrote: >> >> ----- Original Message ----- From: "Nathan Whitehorn" >> >> To: ; >> Sent: Sunday, June 01, 2014 4:55 PM >> Subject: Re: fdisk(8) vs gpart(8), and gnop >> >> >>> On 06/01/14 08:52, Steven Hartland wrote: >>>> ----- Original Message ----- From: "Mark Felder" >>>> >>>>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>>> >>>>>> There's a sysctl where you can set the minimum ashift for zfs. >>>>>> Then you >>>>>> never need to use gnop. >>>>>> >>>>>> I believe it's part of 10.0? The new sysctl is not yet part of a release; it was committed to 11 (head) as r264850 on Thu Apr 24, to stable/10 as r266122 on Thu May 15, and to stable/9 as r266123 on Thu May 15. >>>>> >>>>> I've not seen this yet. What we need is to port the ability to set >>>>> ashift at pool creation time: >>>>> >>>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>>>> >>>>> I believe the Linux zfs port has this functionality now, but we >>>>> still do not. >>>> >>>> We don't have that direct option yet but you can achieve the >>>> same thing by setting: vfs.zfs.min_auto_ashift=12 >>>> >>> Does anyone have any objections to me changing this default, right >>> now, today? >>> -Nathan >> >> I think you will get some objections to that, as it can have quite an >> impact >> on the performance for disks which are 512, due to the increased >> overhead of >> transfering 4k when only 512 is really required. This has a more dramatic >> impact on RAIDZx due too. Another drawback is space consumption. Using 4k when not needed consumes a considerable amount of extra space. The loss can be on the order of terabytes when using many 2TB drives in a raidz, for example. >> >> Personally we run a custom kernel on our machines which has just this >> change >> in it to ensure capability with future disks, so I can confirm it does >> indeed >> have the desired effect :) > > So the discussion here is related to what to do about the installer. The > current ZFS component unconditionally creates gnops all over the place The 10.0-RELEASE installer ZFS configuration defaults to gnop but easily allows the user to opt out with the "Force 4K Sectors" menu option. I think we should keep that end result with the opt-out (replace opt-in gnop with opt-out sysctl?) and reflect that default in the installed kernel since it can be overridden easily. > to set ashift to 4k. That's across the board worse: it has exactly the > performance impact of changing the default of this sysctl (whatever that > is), it can't easily be overridden (which the sysctl can), and it's a > horrible hack to boot. There are a few options: > > 1. Change the default of vfs.zfs.min_auto_ashift > 2. Have the same effect but in a vastly worse way by adjusting the > installer to create gnops > 3. Have ZFS choose by itself and decide to do that permanently. > > Our ATA code is good about reporting block sizes now, so (3) isn't a big > issue except for the mixed-pool case, which is a huge PITA. > > We need to choose one of these. I favor (1).
> -Nathan > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 21:27:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 35726CDD for ; Sun, 1 Jun 2014 21:27:44 +0000 (UTC) Received: from mail-pd0-x22c.google.com (mail-pd0-x22c.google.com [IPv6:2607:f8b0:400e:c02::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D753C222E for ; Sun, 1 Jun 2014 21:27:43 +0000 (UTC) Received: by mail-pd0-f172.google.com with SMTP id fp1so2738444pdb.31 for ; Sun, 01 Jun 2014 14:27:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphix.com; s=google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=ots8PHipsPZ5ortSy5BicTIvA2Bo0qt+PWJ6Ph/ErKo=; b=UuXLaD9jgO08sqafTD6xPIb2XbNdn253cxcqflb8Z8JCRVX6Yz+kLyT4BIy3DyhjhB UqDZ8AtcRlt4SFZAQmtybzH6+vi+ACHCRuzF/ys0xlKv0a4kSA1ZO5GT1umPmC/chX5x wlXH0WtY08Ee6dy+LaHcLovO99FwvRIzvzH9k= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=ots8PHipsPZ5ortSy5BicTIvA2Bo0qt+PWJ6Ph/ErKo=; b=WVb2lhNr70wOImXMgGxYQChyeWG+d3kXQqE2dF43axm/kAtFybvCK4Ynr51vaN5aSY cnL6pKez+oEf3jrYZZv1vf7hTy6mfvdtre7b0OF60FgkcFKjSEBETXiM2bsRH3ui5xbB VP8BBc3b3WsY5z2h8K2nJceMlC9tE26sABTYogKgmxrNJ29JD/Hef55yr2Y2dTzRYRRr HgQpaW4sSBKxjEsfI/ipfbCgn26NmhF1++wkNUEd59ubdwvOlKUTj8yRfzfnCSFJ9JfP Iv1b7p9cUlWYLZ42M2GdMYAO9sWj7JZuSE6WJyRCsTfTYS9QA7SAX7ywWhBjgq15nMU2 Isfg== X-Gm-Message-State: ALoCoQmBqQQLXfjSemRW3AToUbnxj+CbCwTbscibf9v25nTbJC6z/Fed0RAiSxOgUZ1SbhUff6Ic MIME-Version: 1.0 X-Received: by 10.68.133.7 with SMTP id oy7mr35897783pbb.43.1401658062863; Sun, 01 Jun 2014 14:27:42 -0700 (PDT) Received: by 10.70.0.202 with HTTP; Sun, 1 Jun 2014 14:27:42 -0700 (PDT) In-Reply-To: <538B4FD7.4090000@freebsd.org> References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> Date: Sun, 1 Jun 2014 14:27:42 -0700 Message-ID: Subject: Re: fdisk(8) vs gpart(8), and gnop From: Matthew Ahrens To: Nathan Whitehorn Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 21:27:44 -0000 On Sun, Jun 1, 2014 at 9:07 AM, Nathan Whitehorn wrote: > On 06/01/14 09:00, Steven Hartland wrote: > >> >> ----- Original Message ----- From: "Nathan Whitehorn" < >> nwhitehorn@freebsd.org> >> To: ; >> Sent: Sunday, June 01, 2014 4:55 PM >> Subject: Re: fdisk(8) vs gpart(8), and gnop >> >> >> On 06/01/14 08:52, Steven Hartland wrote: >>> >>>> ----- Original Message ----- From: "Mark Felder" >>>> >>>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>>> >>>>> 
There's a sysctl where you can set the minimum ashift for zfs. Then >>>>>> you >>>>>> never need to use gnop. >>>>>> >>>>>> I believe it's part of 10.0? >>>>>> >>>>> >>>>> I've not seen this yet. What we need is to port the ability to set >>>>> ashift at pool creation time: >>>>> >>>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>>>> >>>>> I believe the Linux zfs port has this functionality now, but we still >>>>> do not. >>>>> >>>> >>>> We don't have that direct option yet but you can achieve the >>>> same thing by setting: vfs.zfs.min_auto_ashift=12 >>>> >>>> Does anyone have any objections to me changing this default, right >>> now, today? >>> -Nathan >>> >> >> I think you will get some objections to that, as it can have quite an >> impact >> on the performance for disks which are 512, due to the increased overhead >> of >> transfering 4k when only 512 is really required. This has a more dramatic >> impact on RAIDZx due too. >> >> Personally we run a custom kernel on our machines which has just this >> change >> in it to ensure capability with future disks, so I can confirm it does >> indeed >> have the desired effect :) >> > > So the discussion here is related to what to do about the installer. The > current ZFS component unconditionally creates gnops all over the place to > set ashift to 4k. That's across the board worse: it has exactly the > performance impact of changing the default of this sysctl (whatever that > is), it can't easily be overridden (which the sysctl can), and it's a > horrible hack to boot. There are a few options: > > 1. Change the default of vfs.zfs.min_auto_ashift > This is probably a bad idea -- as others have mentioned, it can drastically impact space usage and performance on 512B disks, especially when using small ZFS blocks (e.g. for databases or VDI) and/or RAID-Z. That said, it could be a reasonable default for specialized distros that are not used for these workloads (maybe FreeNAS or PCBSD?). 2. Have the same effect but in a vastly worse way by adjusting the > installer to create gnops > 3. Have ZFS choose by itself and decide to do that permanently. > If the device reports a 512B sector size, it would be great for ZFS to assume the device could be lying, and automatically determine the minimum ashift which gives good performance. I think this could be done reasonably well for the common case by doing the following when each 512B-sector device is added: 1. do random 4KB writes to the disk to determine wIOPS@4K 2. do random 3.5KB writes to the disk to determine wIOPS@3.5K If wIOPS@4K > wIOPS@3.5K, assume 4KB sectors, otherwise assume 512B sectors. (Note: I haven't tried this in practice; we will need to test it out and perhaps make some tweaks.) I don't have the time or hardware to implement and test this, but I'd be happy to mentor or code review. --matt > > Our ATA code is good about reporting block sizes now, so (3) isn't a big > issue except for the mixed-pool case, which is a huge PITA. > > We need to choose one of these. I favor (1). 
> -Nathan > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Sun Jun 1 21:31:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 327288F for ; Sun, 1 Jun 2014 21:31:53 +0000 (UTC) Received: from mail-pa0-x236.google.com (mail-pa0-x236.google.com [IPv6:2607:f8b0:400e:c03::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 007BA22CE for ; Sun, 1 Jun 2014 21:31:52 +0000 (UTC) Received: by mail-pa0-f54.google.com with SMTP id lf10so3264536pab.41 for ; Sun, 01 Jun 2014 14:31:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphix.com; s=google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=uoCNQcS1rR92X9OgWF3M4XpNU4Ti2uDClI7HBu1vP+k=; b=N4n+SBbCMxs8Vbvyc87PEDyibbOvTMCQgXalkjQavdcUPVH1+GsDHiu9RHFl3pU9nC +BLCJ8TFrdqRefTtdZLZPWM3k4H/Ebwar7lDqtaAtUq/RTTb5tGNZDu7vuPTvzi41XeM rrwYqXboPqCR9lb32F9ausLoo/vAUNmH89oYA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=uoCNQcS1rR92X9OgWF3M4XpNU4Ti2uDClI7HBu1vP+k=; b=R0D1N6db68ayQTxQKmtQcxgMdyXu/jEL2vSXbpm4MZKQbwYND1h/KYy6SgSBFkzhIT pSGqxqaSoeAx6pHACeT3KGPxYDDF1vtII89xBiOYs/Z9sTqJWHO0ZQ38YBUH8Vg3jjsg oHEPeVm/ghsGesSF8pPl4TwJ1SqK0JIf0jGuvThpr+KPNnZiMkCViNMKJ2v1JUo+eq0S AiY5V4z8sYEeKLH+Uw0GHD1yE+zJIGvJzyNQcR2E4pNxLC7ozg1mFk9y2vLDQGVdzvAK bv8TSXx2sIhp3OOkSCclOYpgbspr4B+nVq1sIQ69sx3YLQk1sbyaBGwLlssjMzq6FF98 ySvg== X-Gm-Message-State: ALoCoQmkSiWl2cZKr5nWge6xoypQ+OOyZflPty/58j/0DFY+D5IiPZt5+r8QcEcIcLSu8WwP3mWZ MIME-Version: 1.0 X-Received: by 10.67.14.231 with SMTP id fj7mr35500784pad.115.1401658312556; Sun, 01 Jun 2014 14:31:52 -0700 (PDT) Received: by 10.70.0.202 with HTTP; Sun, 1 Jun 2014 14:31:52 -0700 (PDT) In-Reply-To: <538B4CEF.2030801@freebsd.org> References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> Date: Sun, 1 Jun 2014 14:31:52 -0700 Message-ID: Subject: Re: fdisk(8) vs gpart(8), and gnop From: Matthew Ahrens To: Nathan Whitehorn Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs , FreeBSD Hackers , George Wilson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 01 Jun 2014 21:31:53 -0000 On Sun, Jun 1, 2014 at 8:55 AM, Nathan Whitehorn wrote: > On 06/01/14 08:52, Steven Hartland wrote: > >> ----- Original Message ----- From: "Mark Felder" >> >> On May 31, 2014, at 20:57, Freddie Cash wrote: >>> >>> There's a sysctl where you can set the minimum ashift for zfs. Then you >>>> never need to use gnop. >>>> >>>> I believe it's part of 10.0? >>>> >>> >>> I've not seen this yet. 
What we need is to port the ability to set >>> ashift at pool creation time: >>> >>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>> >>> I believe the Linux zfs port has this functionality now, but we still do >>> not. >> I am strongly against implementing "-o ashift=12"[*]. If we need to explicitly tell ZFS what sector size to use, we should make that intention clear with an appropriately named property, for example "-o device_sector_size=4K". Even better, this property should be per-disk. --matt [*] Once we implement an appropriately-named property, I would be OK with also allowing the "-o ashift=12" as a Linux compatibility feature. > >> We don't have that direct option yet but you can achieve the >> same thing by setting: vfs.zfs.min_auto_ashift=12 >> >> Does anyone have any objections to me changing this default, right now, > today? > -Nathan > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 00:01:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 93D8B14A for ; Mon, 2 Jun 2014 00:01:03 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id 77D7420C3 for ; Mon, 2 Jun 2014 00:01:02 +0000 (UTC) Received: from [192.168.2.9] (c-71-198-189-199.hsd1.ca.comcast.net [71.198.189.199]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id 02D2523BAC1 for ; Sun, 1 Jun 2014 17:00:55 -0700 (PDT) Message-ID: <538BBEB7.4070008@bayphoto.com> Date: Sun, 01 Jun 2014 17:00:55 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: ZFS Kernel Panic on 10.0-RELEASE References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> In-Reply-To: <5388E5B4.3030002@bayphoto.com> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms060901060501040609070000" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 00:01:03 -0000 This is a cryptographically signed message in MIME format. On 5/30/2014 1:10 PM, Mike Carlson wrote: > On 5/30/2014 12:48 PM, Jordan Hubbard wrote: >> On May 30, 2014, at 12:04 PM, Mike Carlson wrote: >> >>> Over the weekend, we had upgraded one of our servers from >>> 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from >>> 28 to 5000) >>> >>> Tuesday afternoon, the server suddenly rebooted (kernel panic), and >>> as soon as it tried to remount all of its ZFS volumes, it panic'd >>> again. >> What's the panic text? That's pretty crucial in figuring out whether >> this is recoverable (e.g. if it's spacemap corruption related, >> probably not). >> >> - Jordan >> >> >> > I had linked the pictures I took of the console, but here is my manual > reproduction:
>
> Fatal trap 12: page fault while in kernel mode
> cpuid = 7; apic id = 07
> fault virtual address = 0x4a0
> fault code = supervisor read data, page not present
> instruction pointer = 0x20:0xffffffff81a7f39f
> stack pointer = 0x28:0xfffffe1834789570
> frame pointer = 0x28:0xfffffe18347895b0
> code segment = base 0x0, limit 0xfffff, type 0x1b = DPL 0, pres 1, long 1, def32 0, gran 1
> processor eflags = interrupt enabled, resume, IOPL = 0
> current process = 1849 (txg_thread_enter)
> trap number = 12
> panic: page fault
> cpuid = 7
> KDB: stack backtrace:
> #0 0xffffffff808e7dd0 at kdb_backtrace+0x60
> #1 0xffffffff808af8b5 at panic+0x155
> #2 0xffffffff80c8e629 at trap_fatal+0x3a2
> #3 0xffffffff80c8e969 at trap_pfault+0x2c9
> #4 0xffffffff80c8e0f6 at trap+0x5e6
> #5 0xffffffff80c75392 at calltrap+0x8
> #6 0xffffffff81a53b5a at dsl_dataset_block_kill+0x3a
> #7 0xffffffff81a50967 at dnode_sync+0x237
> #8 0xffffffff81a48fcb at dmu_objset_sync_dnodes+0x2b
> #9 0xffffffff81a48e4d at dmo_objset_sync+0x1ed
> #10 0xffffffff81a5d29a at dsl_pool_sync+0xca
> #11 0xffffffff81a78a4e at spa_sync+0x52e
> #12 0xffffffff81a81925 at txg_sync_thread+0x375
> #13 0xffffffff8088198a at fork_exit+0x9a
> #14 0xffffffff80c758ce at fork_trampoline+0xe
> uptime: 46s
> Automatic reboot in 15 seconds - press a key on the console to abort
This just happened again to another server. We upgraded two servers on the same morning, and now both of them exhibit this corrupted zfs volume and panic behavior. Out of all the volumes, one of them is causing the panic, and the panic message is nearly identical. I have 4 snapshots over the last 24 hours, so hopefully a snapshot from noon today can be sent to a new volume ( zfs send | zfs recv ). I guess I can now rule out it being a hardware issue; this is clearly a problem related to the upgrade (freebsd-update was used). I first thought the first system had a bad upgrade, perhaps a mix and match of 9.2 binaries running on a 10 kernel, but I used the 'freebsd-update IDS' command to verify the integrity of the install, and it looked good; the only differences were config files in /etc/ that we manage.
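(An aside, not from the thread: the snapshot copy Mike describes would look roughly like the sketch below; the pool, dataset, and snapshot names are placeholders, not ones from the thread:

  # copy the noon snapshot of the damaged volume to a healthy pool;
  # -v reports progress on stderr
  zfs send -v tank/images@noon | zfs receive backup/images

  # or stream it to another machine entirely
  zfs send tank/images@noon | ssh backuphost zfs receive backup/images

Whether this works depends on the corruption being confined to the live dataset rather than pool-wide metadata, since the send still has to traverse the pool that is panicking.)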
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 05:33:01 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4DB09985; Mon, 2 Jun 2014 05:33:01 +0000 (UTC) Received: from gw.catspoiler.org (gw.catspoiler.org [75.1.14.242]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2A9022A9F; Mon, 2 Jun 2014 05:33:00 +0000 (UTC) Received: from FreeBSD.org (mousie.catspoiler.org [192.168.101.2]) by gw.catspoiler.org (8.13.3/8.13.3) with ESMTP id s525Wiqn020165; Sun, 1 Jun 2014 22:32:48 -0700 (PDT) (envelope-from truckman@FreeBSD.org) Message-Id: <201406020532.s525Wiqn020165@gw.catspoiler.org> Date: Sun, 1 Jun 2014 22:32:44 -0700 (PDT) From: Don Lewis Subject: Re: fdisk(8) vs gpart(8), and gnop To: mahrens@delphix.com In-Reply-To: MIME-Version: 1.0 Content-Type: TEXT/plain; charset=us-ascii Cc: freebsd-fs@FreeBSD.org, freebsd-hackers@FreeBSD.org, nwhitehorn@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 05:33:01 -0000 On 1 Jun, Matthew Ahrens wrote: > On Sun, Jun 1, 2014 at 9:07 AM, Nathan Whitehorn > wrote: >> On 06/01/14 09:00, Steven Hartland wrote: >>> >>> ----- Original Message ----- From: "Nathan Whitehorn" < >>> nwhitehorn@freebsd.org> >>> To: ; >>> Sent: Sunday, June 01, 2014 4:55 PM >>> Subject: Re: fdisk(8) vs gpart(8), and gnop >>> >>> >>> On 06/01/14 08:52, Steven Hartland wrote: >>>> >>>>> ----- Original Message ----- From: "Mark Felder" >>>>> >>>>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>>>> >>>>>> There's a sysctl where you can set the minimum ashift for zfs. Then >>>>>>> you >>>>>>> never need to use gnop. >>>>>>> >>>>>>> I believe it's part of 10.0? >>>>>>> >>>>>> >>>>>> I've not seen this yet.
What we need is to port the ability to set >>>>>> ashift at pool creation time: >>>>>> >>>>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>>>>> >>>>>> I believe the Linux zfs port has this functionality now, but we still >>>>>> do not. >>>>>> >>>>> >>>>> We don't have that direct option yet but you can achieve the >>>>> same thing by setting: vfs.zfs.min_auto_ashift=12 >>>>> >>>>> Does anyone have any objections to me changing this default, right >>>> now, today? >>>> -Nathan >>>> >>> >>> I think you will get some objections to that, as it can have quite an >>> impact >>> on the performance for disks which are 512, due to the increased overhead >>> of >>> transfering 4k when only 512 is really required. This has a more dramatic >>> impact on RAIDZx due too. >>> >>> Personally we run a custom kernel on our machines which has just this >>> change >>> in it to ensure capability with future disks, so I can confirm it does >>> indeed >>> have the desired effect :) >>> >> >> So the discussion here is related to what to do about the installer. The >> current ZFS component unconditionally creates gnops all over the place to >> set ashift to 4k. That's across the board worse: it has exactly the >> performance impact of changing the default of this sysctl (whatever that >> is), it can't easily be overridden (which the sysctl can), and it's a >> horrible hack to boot. There are a few options: >> >> 1. Change the default of vfs.zfs.min_auto_ashift >> > > This is probably a bad idea -- as others have mentioned, it can drastically > impact space usage and performance on 512B disks, especially when using > small ZFS blocks (e.g. for databases or VDI) and/or RAID-Z. That said, it > could be a reasonable default for specialized distros that are not used for > these workloads (maybe FreeNAS or PCBSD?). > > 2. Have the same effect but in a vastly worse way by adjusting the >> installer to create gnops >> 3. Have ZFS choose by itself and decide to do that permanently. >> > > If the device reports a 512B sector size, it would be great for ZFS to > assume the device could be lying, and automatically determine the minimum > ashift which gives good performance. I think this could be done reasonably > well for the common case by doing the following when each 512B-sector > device is added: > > 1. do random 4KB writes to the disk to determine wIOPS@4K > 2. do random 3.5KB writes to the disk to determine wIOPS@3.5K > > If wIOPS@4K > wIOPS@3.5K, assume 4KB sectors, otherwise assume 512B > sectors. (Note: I haven't tried this in practice; we will need to test it > out and perhaps make some tweaks.) Or maybe 1. do random 4KB writes that are 4KB aligned 2. do random 4KB writes that are not 4KB aligned That would eliminate any differences due to the I/O size. 
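An illustrative sh sketch of that probe, not code from this thread: the device name, the iteration count, and the one-second timing granularity are all assumptions, and the writes destroy data, so it would only ever be run against an empty scratch disk.

#!/bin/sh
# Probe whether a "512-byte" disk is really 4KB-sectored: time N random
# 4KB writes at 4KB-aligned offsets, then N more shifted by 512 bytes,
# and compare.  DESTRUCTIVE: point DISK at an empty scratch device only.
DISK=/dev/ada1   # assumption: a scratch disk with nothing on it
N=512
probe() {        # $1 = extra 512-byte sectors of misalignment (0 or 1)
    t0=$(date +%s)
    i=0
    while [ "$i" -lt "$N" ]; do
        blk=$(jot -r 1 0 100000)                 # random 4KB block number
        dd if=/dev/zero of="$DISK" bs=512 count=8 \
            oseek=$((blk * 8 + $1)) 2>/dev/null  # one 4KB write
        i=$((i + 1))
    done
    echo $(( $(date +%s) - t0 ))
}
aligned=$(probe 0)
misaligned=$(probe 1)
echo "aligned: ${aligned}s misaligned: ${misaligned}s"
# A clearly slower misaligned pass suggests 4KB physical sectors hiding
# behind a 512-byte logical interface (read-modify-write on every write).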
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 09:09:04 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 61409FE5; Mon, 2 Jun 2014 09:09:04 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 207772BA7; Mon, 2 Jun 2014 09:09:03 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 013E020E7088B; Mon, 2 Jun 2014 09:09:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.3 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 1065A20E70886; Mon, 2 Jun 2014 09:08:57 +0000 (UTC) Message-ID: <35E54263991449299DE1F938A3DA83B0@multiplay.co.uk> From: "Steven Hartland" To: "Matthew Ahrens" , "Nathan Whitehorn" References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> Subject: Re: fdisk(8) vs gpart(8), and gnop Date: Mon, 2 Jun 2014 10:09:02 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs , FreeBSD Hackers , George Wilson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 09:09:04 -0000 ----- Original Message ----- From: "Matthew Ahrens" > On Sun, Jun 1, 2014 at 8:55 AM, Nathan Whitehorn > wrote: > >> On 06/01/14 08:52, Steven Hartland wrote: >> >>> ----- Original Message ----- From: "Mark Felder" >>> >>> On May 31, 2014, at 20:57, Freddie Cash wrote: >>>> >>>> There's a sysctl where you can set the minimum ashift for zfs. Then you >>>>> never need to use gnop. >>>>> >>>>> I believe it's part of 10.0? >>>>> >>>> >>>> I've not seen this yet. What we need is to port the ability to set >>>> ashift at pool creation time: >>>> >>>> $ zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 >>>> >>>> I believe the Linux zfs port has this functionality now, but we still do >>>> not. >>>> >>> > I am strongly against implementing "-o ashift=12"[*]. If we need to > explicitly tell ZFS what sector size to use, we should make that intention > clear with an appropriately named property, for example "-o > device_sector_size=4K". Even better, this property should be per-disk. > > --matt > > [*] Once we implement an appropriately-named property, I would be OK with > also allowing the "-o ashift=12" as a Linux compatibility feature. Being able to override the detected sector size for a disk would get my vote, as that would result in everything else just working and would make it possible to have a different ashift per top level vdev. That said, I'm not sure if that should be a ZFS option or just an OS option? IIRC solaris / illumos has that ability already?
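As a concrete sketch of the sysctl route discussed above (the pool and disk names are placeholders, and the zdb check assumes the pool is present in the default cache file):

# Raise the ashift floor for newly added vdevs, then create the pool as usual.
sysctl vfs.zfs.min_auto_ashift=12
zpool create tank mirror da0 da1
# Verify what the vdev actually got (expect "ashift: 12"):
zdb -C tank | grep ashift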
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 09:12:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BB2E21EA for ; Mon, 2 Jun 2014 09:12:20 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 52F302C42 for ; Mon, 2 Jun 2014 09:12:19 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 4C65A20E7088C; Mon, 2 Jun 2014 09:12:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 2F99020E70886; Mon, 2 Jun 2014 09:12:15 +0000 (UTC) Message-ID: <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> From: "Steven Hartland" To: , References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> Subject: Re: ZFS Kernel Panic on 10.0-RELEASE Date: Mon, 2 Jun 2014 10:12:20 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="Windows-1252"; reply-type=response Content-Transfer-Encoding: 8bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 09:12:20 -0000 ----- Original Message ----- From: "Mike Carlson" > On 5/30/2014 1:10 PM, Mike Carlson wrote: > > On 5/30/2014 12:48 PM, Jordan Hubbard wrote: > >> On May 30, 2014, at 12:04 PM, Mike Carlson wrote: > >> > >>> Over the weekend, we had upgraded one of our servers from 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from > >>> 28 to 5000) > >>> > >>> Tuesday afternoon, the server suddenly rebooted (kernel panic), and as soon as it tried to remount all of its ZFS volumes, > >>> it panic'd again. > >> What’s the panic text? That’s pretty crucial in figuring out whether this is recoverable (e.g. if it’s spacemap corruption > >> related, probably not). 
> >> > >> - Jordan > >> > >> > >> > > I had linked the pictures I took of the console, but here is my manual reproduction: > > > > Fatal trap 12: page fault while in kernel mode > > cpuid = 7; apic id = 07 > > fault virtual address = 0x4a0 > > fault code = supervisor read data, page not present > > instruction pointer = 0x20:0xffffffff81a7f39f > > stack pointer = 0x28:0xfffffe1834789570 > > frame pointer = 0x28:0xfffffe18347895b0 > > code segment = base 0x0, limit 0xfffff, type 0x1b > > = DPL 0, pres 1, long 1, def32 0, gran 1 > > processor eflags = interrupt enabled, resume, IOPL = 0 > > current process = 1849 (txg_thread_enter) > > trap number = 12 > > panic: page fault > > cpuid = 7 > > KDB: stack backtrace: > > #0 0xffffffff808e7dd0 at kdb_backtrace+0x60 > > #1 0xffffffff808af8b5 at panic+0x155 > > #2 0xffffffff80c8e629 at trap_fatal+0x3a2 > > #3 0xffffffff80c8e969 at trap_pfault+0x2c9 > > #4 0xffffffff80c8e0f6 at trap+0x5e6 > > #5 0xffffffff80c75392 at calltrap+0x8 > > #6 0xffffffff81a53b5a at dsl_dataset_block_kill+0x3a > > #7 0xffffffff81a50967 at dnode_sync+0x237 > > #8 0xffffffff81a48fcb at dmu_objset_sync_dnodes+0x2b > > #9 0xffffffff81a48e4d at dmu_objset_sync+0x1ed > > #10 0xffffffff81a5d29a at dsl_pool_sync+0xca > > #11 0xffffffff81a78a4e at spa_sync+0x52e > > #12 0xffffffff81a81925 at txg_sync_thread+0x375 > > #13 0xffffffff8088198a at fork_exit+0x9a > > #14 0xffffffff80c758ce at fork_trampoline+0xe > > uptime: 46s > > Automatic reboot in 15 seconds - press a key on the console to abort > > This just happened again to another server. We upgraded two servers on the same morning, and now both of them exhibit this > corrupted zfs volume and panic behavior. > > Out of all the volumes, one of them is causing the panic, and the panic message is nearly identical. > > I have 4 snapshots over the last 24 hours, so hopefully a snapshot from noon today can be sent to a new volume ( zfs send | zfs > recv ) > > I guess I can now rule out it being a hardware issue; this is clearly a problem related to the upgrade (freebsd-update was used). > I first thought the first system had a bad upgrade, perhaps a mix and match of 9.2 binaries running on a 10 kernel, but I used > the 'freebsd-update IDS' command to verify the integrity of the install, and it looked good; the only differences were config > files in /etc/ that we manage. > Do you have a kernel crash dump from this? Also can you confirm if you're amd64 or just i386?
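For the zfs send | zfs recv rescue mentioned above, a sketch of the usual commands, with made-up dataset and snapshot names:

# Copy a known-good noon snapshot of the damaged dataset into a fresh one.
zfs list -t snapshot -r tank/data            # find the snapshot to rescue
zfs send tank/data@noon | zfs recv tank/data.rescued
# To land it on another machine instead, pipe through ssh:
# zfs send tank/data@noon | ssh otherhost zfs recv backup/data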
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 13:24:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 86686EB6 for ; Mon, 2 Jun 2014 13:24:23 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id DEE82248C for ; Mon, 2 Jun 2014 13:24:21 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqAEANxejFODaFve/2dsb2JhbABYhDGCbL9ngSd0gk+BCwINGQJfiFWgYY8ipDsXgSqMdIMwgUsErS2DVCGBcg X-IronPort-AV: E=Sophos;i="4.98,957,1392181200"; d="scan'208";a="125564179" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 02 Jun 2014 09:23:11 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 9C7B6B4033 for ; Mon, 2 Jun 2014 09:23:11 -0400 (EDT) Date: Mon, 2 Jun 2014 09:23:11 -0400 (EDT) From: Rick Macklem To: FreeBSD Filesystems Message-ID: <220107037.9988770.1401715391557.JavaMail.root@uoguelph.ca> Subject: RFC and testing: NFSv4.1 server going into head MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 13:24:23 -0000 Hi, I think that the NFSv4.1 server code in projects/nfsv4.1-server is about ready to be merged into head. As such, if anyone has the resources to do so in the next 2 weeks, please take a look at the code and/or test it. Also, feel free to make any comments w.r.t. merging this code into head, such as preferred timing, whether or not you think it should happen, etc. If/when the merge is done, it will be fairly large, but shouldn't affect the NFSv3, NFSv4.0 server functionality (however, I may screw up and break them for a little while;-). I think NFSv4.1 might be useful, since it uses sessions to provide "exactly once" RPC semantics, which should improve overall correctness. This server code does not have any pNFS support in it. Implementing a pNFS server is a large project that may happen someday. Thanks in advance for any testing/review/comments, rick ps: The NFSv4.1 client is already in head and the options for mounting with NFSv4.1 are "nfsv4.minorversion=1" for FreeBSD and "vers=4,minorversion=1" for the Linux client. 
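For reference, a sketch of test mounts against such a server, using a placeholder host and export; the FreeBSD spelling assumes the option string above expands to the nfsv4 and minorversion mount_nfs options:

# FreeBSD client:
mount -t nfs -o nfsv4,minorversion=1 server:/export /mnt
# Linux client:
# mount -t nfs -o vers=4,minorversion=1 server:/export /mnt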
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 15:02:35 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 69E5C952; Mon, 2 Jun 2014 15:02:35 +0000 (UTC) Received: from i3mail.icecube.wisc.edu (i3mail.icecube.wisc.edu [128.104.255.23]) by mx1.freebsd.org (Postfix) with ESMTP id 20F0B2022; Mon, 2 Jun 2014 15:02:34 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by i3mail.icecube.wisc.edu (Postfix) with ESMTP id 7C7F13806B; Mon, 2 Jun 2014 10:02:33 -0500 (CDT) X-Virus-Scanned: amavisd-new at icecube.wisc.edu Received: from i3mail.icecube.wisc.edu ([127.0.0.1]) by localhost (i3mail.icecube.wisc.edu [127.0.0.1]) (amavisd-new, port 10030) with ESMTP id X0mKz39HqwFI; Mon, 2 Jun 2014 10:02:33 -0500 (CDT) Received: from comporellon.tachypleus.net (polaris.tachypleus.net [75.101.50.44]) by i3mail.icecube.wisc.edu (Postfix) with ESMTPSA id D2F093806A; Mon, 2 Jun 2014 10:02:32 -0500 (CDT) Message-ID: <538C9207.9040806@freebsd.org> Date: Mon, 02 Jun 2014 08:02:31 -0700 From: Nathan Whitehorn User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Matthew Ahrens Subject: Re: fdisk(8) vs gpart(8), and gnop References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 15:02:35 -0000 On 06/01/14 14:27, Matthew Ahrens wrote: > >>> I think you will get some objections to that, as it can have quite an >>> impact >>> on the performance for disks which are 512, due to the increased overhead >>> of >>> transfering 4k when only 512 is really required. This has a more dramatic >>> impact on RAIDZx due too. >>> >>> Personally we run a custom kernel on our machines which has just this >>> change >>> in it to ensure capability with future disks, so I can confirm it does >>> indeed >>> have the desired effect :) >>> >> So the discussion here is related to what to do about the installer. The >> current ZFS component unconditionally creates gnops all over the place to >> set ashift to 4k. That's across the board worse: it has exactly the >> performance impact of changing the default of this sysctl (whatever that >> is), it can't easily be overridden (which the sysctl can), and it's a >> horrible hack to boot. There are a few options: >> >> 1. Change the default of vfs.zfs.min_auto_ashift >> > This is probably a bad idea -- as others have mentioned, it can drastically > impact space usage and performance on 512B disks, especially when using > small ZFS blocks (e.g. for databases or VDI) and/or RAID-Z. That said, it > could be a reasonable default for specialized distros that are not used for > these workloads (maybe FreeNAS or PCBSD?). > > 2. Have the same effect but in a vastly worse way by adjusting the >> installer to create gnops >> 3. Have ZFS choose by itself and decide to do that permanently. 
>> > If the device reports a 512B sector size, it would be great for ZFS to > assume the device could be lying, and automatically determine the minimum > ashift which gives good performance. I think this could be done reasonably > well for the common case by doing the following when each 512B-sector > device is added: > > 1. do random 4KB writes to the disk to determine wIOPS@4K > 2. do random 3.5KB writes to the disk to determine wIOPS@3.5K > > If wIOPS@4K > wIOPS@3.5K, assume 4KB sectors, otherwise assume 512B > sectors. (Note: I haven't tried this in practice; we will need to test it > out and perhaps make some tweaks.) > > I don't have the time or hardware to implement and test this, but I'd be > happy to mentor or code review. > > --matt I think we basically don't have any lying disks anymore. The ATA code does a very good job of this -- most tell the truth, but in an odd way that gets reported up the stack. ada(4) has a quirks table for the ones that do not. If this is the only concern, then we should just stop telling people to worry about this. My bigger concern is this pool upgrade one -- what if someone puts in a 4K disk in the future? -Nathan From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 15:49:09 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 98542F79 for ; Mon, 2 Jun 2014 15:49:09 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id 7998B248B for ; Mon, 2 Jun 2014 15:49:08 +0000 (UTC) Received: from [192.168.251.238] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id EC62823BC13; Mon, 2 Jun 2014 08:49:07 -0700 (PDT) Message-ID: <538C9CF3.6070208@bayphoto.com> Date: Mon, 02 Jun 2014 08:49:07 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , freebsd-fs@freebsd.org Subject: Re: ZFS Kernel Panic on 10.0-RELEASE References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> In-Reply-To: <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms020608040302080809050203" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 15:49:09 -0000 This is a cryptographically signed message in MIME format. 
--------------ms020608040302080809050203 Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: quoted-printable On 6/2/2014 2:12 AM, Steven Hartland wrote: > ----- Original Message ----- From: "Mike Carlson" > >> On 5/30/2014 1:10 PM, Mike Carlson wrote: >> > On 5/30/2014 12:48 PM, Jordan Hubbard wrote: >> >> On May 30, 2014, at 12:04 PM, Mike Carlson wrote: >> >> >> >>> Over the weekend, we had upgraded one of our servers from >> 9.1-RELEASE to 10.0-RELEASE, and then the zpool was upgraded (from >> >>> 28 to 5000) >> >>> >> >>> Tuesday afternoon, the server suddenly rebooted (kernel panic), and as soon as it tried to remount all of its ZFS volumes, >>> it >> panic'd again. >> >> What’s the panic text? That’s pretty crucial in figuring out whether this is recoverable (e.g. if it’s spacemap corruption >> related, probably not). >> >> >> >> - Jordan >> >> >> >> >> > I had linked the pictures I took of the console, but here is my manual reproduction: >> > >> > Fatal trap 12: page fault while in kernel mode >> > cpuid = 7; apic id = 07 >> > fault virtual address = 0x4a0 >> > fault code = supervisor read data, page not present >> > instruction pointer = 0x20:0xffffffff81a7f39f >> > stack pointer = 0x28:0xfffffe1834789570 >> > frame pointer = 0x28:0xfffffe18347895b0 >> > code segment = base 0x0, limit 0xfffff, type 0x1b >> > = DPL 0, pres 1, long 1, def32 0, gran 1 >> > processor eflags = interrupt enabled, resume, IOPL = 0 >> > current process = 1849 (txg_thread_enter) >> > trap number = 12 >> > panic: page fault >> > cpuid = 7 >> > KDB: stack backtrace: >> > #0 0xffffffff808e7dd0 at kdb_backtrace+0x60 >> > #1 0xffffffff808af8b5 at panic+0x155 >> > #2 0xffffffff80c8e629 at trap_fatal+0x3a2 >> > #3 0xffffffff80c8e969 at trap_pfault+0x2c9 >> > #4 0xffffffff80c8e0f6 at trap+0x5e6 >> > #5 0xffffffff80c75392 at calltrap+0x8 >> > #6 0xffffffff81a53b5a at dsl_dataset_block_kill+0x3a >> > #7 0xffffffff81a50967 at dnode_sync+0x237 >> > #8 0xffffffff81a48fcb at dmu_objset_sync_dnodes+0x2b >> > #9 0xffffffff81a48e4d at dmu_objset_sync+0x1ed >> > #10 0xffffffff81a5d29a at dsl_pool_sync+0xca >> > #11 0xffffffff81a78a4e at spa_sync+0x52e >> > #12 0xffffffff81a81925 at txg_sync_thread+0x375 >> > #13 0xffffffff8088198a at fork_exit+0x9a >> > #14 0xffffffff80c758ce at fork_trampoline+0xe >> > uptime: 46s >> > Automatic reboot in 15 seconds - press a key on the console to abort >> > >> This just happened again to another server. We upgraded two servers >> on the same morning, and now both of them exhibit this corrupted zfs >> volume and panic behavior. >> >> Out of all the volumes, one of them is causing the panic, and the >> panic message is nearly identical. >> >> I have 4 snapshots over the last 24 hours, so hopefully a snapshot >> from noon today can be sent to a new volume ( zfs send | zfs recv ) >> >> I guess I can now rule out it being a hardware issue; this is clearly >> a problem related to the upgrade (freebsd-update was used). I first >> thought the first system had a bad upgrade, perhaps a mix and match >> of 9.2 binaries running on a 10 kernel, but I used the >> 'freebsd-update IDS' command to verify the integrity of the install, >> and it looked good; the only differences were config files in /etc/ >> that we manage. >> > > Do you have a kernel crash dump from this? > > Also can you confirm if you're amd64 or just i386?
> > Regards > Steve > > I don't have a crash dump, and this is on amd64. I might be able to get a crash dump on one of them; the other is back up and running. It is a little challenging because the system I can do this on has zfs on root, but I have a spare drive I can use as the swap volume. Mike C
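A sketch of the spare-drive dump setup Carlson describes; the partition name and dump directory are assumptions:

# Point crash dumps at a swap partition on the spare drive.
swapon /dev/ada2p1                  # hypothetical partition on the spare disk
dumpon /dev/ada2p1                  # use it as the dump device now
echo 'dumpdev="/dev/ada2p1"' >> /etc/rc.conf.local   # and after reboots
# After the next panic, recover the dump into /var/crash:
savecore /var/crash /dev/ada2p1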
--------------ms020608040302080809050203-- From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 15:49:35 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CFEF495; Mon, 2 Jun 2014 15:49:35 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mail.feld.me", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9E6932495; Mon, 2 Jun 2014 15:49:34 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id f94517e4; Mon, 2 Jun 2014 10:49:33 -0500 (CDT) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=feld.me; h=mime-version :content-type:date:from:to:cc:subject:in-reply-to:references :message-id:sender; s=blargle2; bh=U1+qVQibQ1ewYluFG2hYO5ZYx1k=; b= FJtCok4A5uyeFvUKyy9L/koWIMYHsVSk3Pzg8QE4FVOLlRfpgaMn3pandKKd5i62 ejAqH/ZZoIkJxn7g986mwpVj8L29eXleQvX4BtJBvYoWlhlp4WaGhQ/gSt4zM9j6 Q/ooaE97v09xVarQqJbElQJEfQ3lB6s5WPPa/pKfvPNKpFFWWYWuTq8QmczhXpno xLGpUnybi+0F8roASaKEfsnXTr6eqptgpXOtcLjqgWekfsGndkYeFnMvOkSrNhq6 A0cONPkCiBdasJhTaQ833ZjfKd78n3qh/Y98sVQIyaA5xCbQ9fRjF7bwQkG5xqiM KMwiKndP6+w34V1OgbS+7Q== DomainKey-Signature: a=rsa-sha1; c=nofws; d=feld.me; h=mime-version :content-type:date:from:to:cc:subject:in-reply-to:references :message-id:sender; q=dns; s=blargle2; b=now8t03R7BtRZ8qdnrHUOTq oCSGCfoih73+U03Lt5/3qnjxJxGx/r3ChKjfzigfbDoKt2CKtGXObwN3CC8XgAsf 61s6Z1mmHcC62xJWQiaWbYcC0kIsLJREqxkHJTrxyy+fTLOGSLVvkjEiqf14tFRK +lxx/bPBFlHEXb6B0s+iGsTR3YGmxjNsJF3aBjjnpOgW/CkKhgoGuhm4JZTqGyaq yUhfYTg9jm4XsQFHeZu7My75RhFZWZo+Wy2ETKkAvTcf70dmfQSlBUZAoSTCWc1r
e6AlMkz2wRcZPKbWB2FUZG9WpRfrB2lSWep1UVNVtM3BNPO+ZBAFbjLL0bwgMFw= = Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id b03b15d9; Mon, 2 Jun 2014 10:49:33 -0500 (CDT) Received: from feld@feld.me by mail.feld.me (Archiveopteryx 3.2.0) with esmtpa id 1401724172-322-320/5/20; Mon, 2 Jun 2014 15:49:32 +0000 Mime-Version: 1.0 Content-Type: text/plain; format=flowed Date: Mon, 2 Jun 2014 10:49:32 -0500 From: Mark Felder To: Nathan Whitehorn Subject: Re: fdisk(8) vs gpart(8), and gnop In-Reply-To: <538C9207.9040806@freebsd.org> References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> Message-Id: X-Sender: feld@FreeBSD.org User-Agent: Roundcube Webmail/1.0.1 Sender: feld@feld.me Cc: freebsd-fs@freebsd.org, FreeBSD Hackers , Matthew Ahrens , owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 15:49:35 -0000 On 2014-06-02 10:02, Nathan Whitehorn wrote: > > My bigger concern is this pool upgrade one -- what if someone puts in > a 4K disk in the future? This is a concern of mine, and I sort of wish we did 4k by default and forced people to override if they want 512b or something else. From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 16:37:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 86C84890 for ; Mon, 2 Jun 2014 16:37:17 +0000 (UTC) Received: from mail-pa0-x230.google.com (mail-pa0-x230.google.com [IPv6:2607:f8b0:400e:c03::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5329D2A1A for ; Mon, 2 Jun 2014 16:37:17 +0000 (UTC) Received: by mail-pa0-f48.google.com with SMTP id fb1so1331540pad.35 for ; Mon, 02 Jun 2014 09:37:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphix.com; s=google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=bwzj8nSsmgJWdnVCwWtYZ4ZSvRs2oTN3S+7Jnk/tsRE=; b=Vsm+SJwWORZOynBmpld2S15HbEh1w8ABiaKvzU5ZkrKyHV5thOW0zGl/jTzPfzeMEd 91bFPRUN5XhSbls+JBvr/Y8W8Mearf8s+qLjdy3TAlJr8gwOqGLHjh+R3BwvsX8gLltP c2EzxNQYPfsJAfw6M4Tcx393AFOPhHCfob7Ds= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=bwzj8nSsmgJWdnVCwWtYZ4ZSvRs2oTN3S+7Jnk/tsRE=; b=RMVPAYJIB6RklAQcLa5+N6DFzfH0+cnKu/3h/ti8Ity4vSxotFnC8HbSVorLkhvXCW 5Lh87nrFYSMn+r9pfcq8JvnHQNU7tggGZtjzKSsvX2/OsTfr1eygHBix/m1GsgUul2p1 HY+KCPKDO10dYGn1O005CgSQOJqza9jkPn7VAypJxCVPIQUh7bYVb20Rzsv7SDWA2UbP k/hvl1zmZ+zoNMmqRtj1mdmRXy3IjkSNIdc5CSYbt+aWoqzbB51ZR9BqqrU+wjZkpt3W lDlMUZZDLGoU6VRmSV8idGGsPotkEb0LGU9BcED9IZkWC+WtXVCrphleKm+tnsQPTjFn +WlA== X-Gm-Message-State: ALoCoQnkhyqwvxZ9397NjgzBWu4Z3M4+etr99ajqMhMzX4LY0ubIOZBcTwDZAHlZ6jIDliOrnYlB MIME-Version: 1.0 X-Received: by 10.68.164.100 with SMTP id yp4mr41222247pbb.136.1401727036804; Mon, 02 Jun 2014 09:37:16 
-0700 (PDT) Received: by 10.70.0.202 with HTTP; Mon, 2 Jun 2014 09:37:16 -0700 (PDT) In-Reply-To: <538C9207.9040806@freebsd.org> References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> Date: Mon, 2 Jun 2014 09:37:16 -0700 Message-ID: Subject: Re: fdisk(8) vs gpart(8), and gnop From: Matthew Ahrens To: Nathan Whitehorn Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 16:37:17 -0000 On Mon, Jun 2, 2014 at 8:02 AM, Nathan Whitehorn wrote: > > My bigger concern is this pool upgrade one -- what if someone puts in a 4K > disk in the future? > > We could dynamically change the "effective ashift" by preferring to allocate multiples of 4K. This would involve some work in the space allocator. Again, I don't have time to do this right now but would be happy to mentor someone. --matt From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:07:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 44941C15; Mon, 2 Jun 2014 17:07:14 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id D18CB2D35; Mon, 2 Jun 2014 17:07:13 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id E7BB020E7088B; Mon, 2 Jun 2014 17:07:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 10B7D20E70886; Mon, 2 Jun 2014 17:07:09 +0000 (UTC) Message-ID: From: "Steven Hartland" To: "Nathan Whitehorn" , "Matthew Ahrens" References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> Subject: Re: fdisk(8) vs gpart(8), and gnop Date: Mon, 2 Jun 2014 18:07:14 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 17:07:14 -0000 ----- Original Message ----- From: "Nathan Whitehorn" To: "Matthew Ahrens" Cc: "freebsd-fs" ; "FreeBSD Hackers" ; "Steven Hartland" Sent: Monday, June 02, 2014 4:02 PM Subject: Re: fdisk(8) vs gpart(8), and gnop > On 06/01/14 
14:27, Matthew Ahrens wrote: >> >>>> I think you will get some objections to that, as it can have quite an >>>> impact >>>> on the performance for disks which are 512, due to the increased overhead >>>> of >>>> transfering 4k when only 512 is really required. This has a more dramatic >>>> impact on RAIDZx due too. >>>> >>>> Personally we run a custom kernel on our machines which has just this >>>> change >>>> in it to ensure capability with future disks, so I can confirm it does >>>> indeed >>>> have the desired effect :) >>>> >>> So the discussion here is related to what to do about the installer. The >>> current ZFS component unconditionally creates gnops all over the place to >>> set ashift to 4k. That's across the board worse: it has exactly the >>> performance impact of changing the default of this sysctl (whatever that >>> is), it can't easily be overridden (which the sysctl can), and it's a >>> horrible hack to boot. There are a few options: >>> >>> 1. Change the default of vfs.zfs.min_auto_ashift >>> >> This is probably a bad idea -- as others have mentioned, it can drastically >> impact space usage and performance on 512B disks, especially when using >> small ZFS blocks (e.g. for databases or VDI) and/or RAID-Z. That said, it >> could be a reasonable default for specialized distros that are not used for >> these workloads (maybe FreeNAS or PCBSD?). >> >> 2. Have the same effect but in a vastly worse way by adjusting the >>> installer to create gnops >>> 3. Have ZFS choose by itself and decide to do that permanently. >>> >> If the device reports a 512B sector size, it would be great for ZFS to >> assume the device could be lying, and automatically determine the minimum >> ashift which gives good performance. I think this could be done reasonably >> well for the common case by doing the following when each 512B-sector >> device is added: >> >> 1. do random 4KB writes to the disk to determine wIOPS@4K >> 2. do random 3.5KB writes to the disk to determine wIOPS@3.5K >> >> If wIOPS@4K > wIOPS@3.5K, assume 4KB sectors, otherwise assume 512B >> sectors. (Note: I haven't tried this in practice; we will need to test it >> out and perhaps make some tweaks.) >> >> I don't have the time or hardware to implement and test this, but I'd be >> happy to mentor or code review. >> >> --matt > > I think we basically don't have any lying disks anymore. The ATA code does a very good job of this -- most tell the truth, but > in an odd way that gets reported up the stack. ada(4) has a quirks table for the ones that do not. If this is the only concern, > then we should just stop telling people to worry about this. > > My bigger concern is this pool upgrade one -- what if someone puts in a 4K disk in the future? That's very much not the case I'm afraid; I try to add quirks for disks as they are reported, but there are always going to be quite a few which are wrong until manufacturers stop making their FW lie :( We really need a system which can be user updated for this sort of thing, but I've not had any time to even think about that I'm afraid. IIRC scottl has ideas in this area too.
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:09:26 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7A7BADA8 for ; Mon, 2 Jun 2014 17:09:26 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 3CCAE2D69 for ; Mon, 2 Jun 2014 17:09:26 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 9A01520E7088C; Mon, 2 Jun 2014 17:09:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 2522F20E70886; Mon, 2 Jun 2014 17:09:20 +0000 (UTC) Message-ID: <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk> From: "Steven Hartland" To: , References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> <538C9CF3.6070208@bayphoto.com> Subject: Re: ZFS Kernel Panic on 10.0-RELEASE Date: Mon, 2 Jun 2014 18:09:25 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="Windows-1252"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 17:09:26 -0000 ----- Original Message ----- From: "Mike Carlson" > I don't have a crash dump, and this is on amd64 > > I might be able to get a crash dump on one of them, the other is back up > and running. It is a little challenging because the system I can do this > on has zfs on root, but I have a spare drive I can use as the swap volume. A crash dump would be very useful, so if you can that would be appreciated.
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:12:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 06256E50; Mon, 2 Jun 2014 17:12:52 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id B85252DFB; Mon, 2 Jun 2014 17:12:51 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id E56F520E7088C; Mon, 2 Jun 2014 17:12:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 8380E20E70886; Mon, 2 Jun 2014 17:12:45 +0000 (UTC) Message-ID: From: "Steven Hartland" To: "Mark Felder" , "Nathan Whitehorn" References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> Subject: Re: fdisk(8) vs gpart(8), and gnop Date: Mon, 2 Jun 2014 18:12:50 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, FreeBSD Hackers , Matthew Ahrens , owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 17:12:52 -0000 ----- Original Message ----- From: "Mark Felder" > On 2014-06-02 10:02, Nathan Whitehorn wrote: >> >> My bigger concern is this pool upgrade one -- what if someone puts in >> a 4K disk in the future? > > This is a concern of mine, and I sort of wish we did 4k by default and > forced people to override if they want 512b or something else. That is exactly why we enforce min 4k everywhere here too, but it's not for everyone, which is why I stuck to the 512b default when I added it. I guess the big question is: is future compatibility vs performance the right way to go for the default, as those who want the absolute best performance could always reduce the value prior to creating / adding top level vdevs?
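A quick sketch of how one might check which case a given drive falls into before settling on a default; the device name is a placeholder:

# GEOM's view of the disk: an honest 512e/4Kn drive shows sectorsize 512
# with stripesize 4096; a lying one reports stripesize 0 and needs an
# ada(4)/da(4) quirk (or a forced min_auto_ashift) to end up at ashift=12.
diskinfo -v ada0 | egrep 'sectorsize|stripesize'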
Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:15:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F1D5AFF1; Mon, 2 Jun 2014 17:15:52 +0000 (UTC) Received: from aslan.scsiguy.com (aslan.scsiguy.com [70.89.174.89]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A21102E32; Mon, 2 Jun 2014 17:15:52 +0000 (UTC) Received: from jt-mbp.sldomain.com (207-225-98-3.dia.static.qwest.net [207.225.98.3]) (authenticated bits=0) by aslan.scsiguy.com (8.14.8/8.14.8) with ESMTP id s52HFlxG018167 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Mon, 2 Jun 2014 11:15:49 -0600 (MDT) (envelope-from gibbs@scsiguy.com) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: fdisk(8) vs gpart(8), and gnop From: "Justin T. Gibbs" In-Reply-To: Date: Mon, 2 Jun 2014 11:15:42 -0600 Content-Transfer-Encoding: quoted-printable Message-Id: <61DC020F-F061-4A6E-AAEA-F0AE4CAE92F9@scsiguy.com> References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> To: Mark Felder X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org, FreeBSD Hackers , Matthew Ahrens , owner-freebsd-fs@freebsd.org, Nathan Whitehorn X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 17:15:53 -0000 On Jun 2, 2014, at 9:49 AM, Mark Felder wrote: > On 2014-06-02 10:02, Nathan Whitehorn wrote: >> My bigger concern is this pool upgrade one -- what if someone puts in >> a 4K disk in the future? > > This is a concern of mine, and I sort of wish we did 4k by default and forced people to override if they want 512b or something else. Adding a 4k sectored device is fine. You just need to use it in a new top-level vdev in the pool. If you are at the point where you can’t get new or compatible warranty replacements for the drives that may fail in your existing pool, you should be migrating your data to new devices anyway. Mixing devices with different performance characteristics within a TLV can lead to pessimal behavior. I don’t think that ZFS should jump through large hoops to try and make this work well. Instead, we should encourage the use of similar devices within a TLV (guidance that the installer has sufficient information to provide*) and the system should be optimized assuming this is how it will be used. I certainly *do not* want FreeBSD to automatically inflate the ashift used on my pools. Doing so is an attempt to guess why I chose the devices I did at pool creation time and my strategy for retiring them in the future. The current proposal guesses wrong for me and the products I help build. I’d bet it will be wrong more times than right. — Justin *) Using the tools already in FreeBSD it is quite easy to group devices by transport type, capacity, logical block size, physical block size, and, for at least SCSI transports, media rotational speed.
We do this in Spectra’s ZFS appliance so users have to work really hard to mix devices that they shouldn’t. From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:20:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 40BEB227; Mon, 2 Jun 2014 17:20:10 +0000 (UTC) Received: from d.mail.sonic.net (d.mail.sonic.net [64.142.111.50]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 231BA2E72; Mon, 2 Jun 2014 17:20:09 +0000 (UTC) Received: from aurora.physics.berkeley.edu (aurora.Physics.Berkeley.EDU [128.32.117.67]) (authenticated bits=0) by d.mail.sonic.net (8.14.4/8.14.4) with ESMTP id s52HK7nD016453 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NOT); Mon, 2 Jun 2014 10:20:07 -0700 Message-ID: <538CB246.9080905@freebsd.org> Date: Mon, 02 Jun 2014 10:20:06 -0700 From: Nathan Whitehorn User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: "Justin T. Gibbs" , Mark Felder Subject: Re: fdisk(8) vs gpart(8), and gnop References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> <61DC020F-F061-4A6E-AAEA-F0AE4CAE92F9@scsiguy.com> In-Reply-To: <61DC020F-F061-4A6E-AAEA-F0AE4CAE92F9@scsiguy.com> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-Sonic-ID: C;eqR8JHrq4xGzasUoeQW9yA== M;MIimJHrq4xGzasUoeQW9yA== Cc: freebsd-fs@freebsd.org, FreeBSD Hackers , Matthew Ahrens , owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 17:20:10 -0000 On 06/02/14 10:15, Justin T. Gibbs wrote: > On Jun 2, 2014, at 9:49 AM, Mark Felder wrote: > >> On 2014-06-02 10:02, Nathan Whitehorn wrote: >>> My bigger concern is this pool upgrade one -- what if someone puts in >>> a 4K disk in the future? >> This is a concern of mine, and I sort of wish we did 4k by default and forced people to override if they want 512b or something else. > Adding a 4k sectored device is fine. You just need to use it in a new top-level vdev in the pool. > > If you are at the point where you can’t get new or compatible warranty replacements for the drives that may fail in your existing pool, you should be migrating your data to new devices anyway. Mixing devices with different performance characteristics within a TLV can lead to pessimal behavior. I don’t think that ZFS should jump through large hoops to try and make this work well. Instead, we should encourage the use of similar devices within a TLV (guidance that the installer has sufficient information to provide*) and the system should be optimized assuming this is how it will be used. > > I certainly *do not* want FreeBSD to automatically inflate the ashift used on my pools. Doing so is an attempt to guess why I chose the devices I did at pool creation time and my strategy for retiring them in the future. The current proposal guesses wrong for me and the products I help build.
> I'd bet it will be wrong more times than right.
>
> --
> Justin
>
> *) Using the tools already in FreeBSD it is quite easy to group devices by transport type, capacity, logical block size, physical block size, and, for at least SCSI transports, media rotational speed. We do this in Spectra's ZFS appliance so users have to work really hard to mix devices that they shouldn't.

Well, this makes it sound easy, then: we just don't worry about it and keep the existing defaults. This requires some documentation updates and changes to the installer. The "standard" advice seems to be, universally, to add gnops to set the sector size to 4k, and the existing installer ZFS support does this.

-Nathan

From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:27:08 2014
From: Mike Carlson
Reply-To: mike@bayphoto.com
Date: Mon, 02 Jun 2014 10:27:06 -0700
To: Steven Hartland, freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <538CB3EA.9010807@bayphoto.com>
In-Reply-To: <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk>

On 6/2/2014 10:09 AM, Steven Hartland wrote:
> ----- Original Message ----- From: "Mike Carlson"
>> I don't have a crash dump, and this is on amd64
>>
>> I might be able to get a crash dump on one of them; the other is back
>> up and running. It is a little challenging because the system I can
>> do this on has zfs on root, but I have a spare drive I can use as the
>> swap volume.
>
> A crash dump would be very useful, so if you can, that would be
> appreciated.
>
> Regards
> Steve

I have a crash dump; it's 3.2GB in size.

How can I get this to the right people?
I can put it up on S3, but I'd prefer to limit its access since it may contain sensitive information (we don't have local accounts aside from root, but better to be safe).

Mike C
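A dump that size shrinks considerably before transfer; a minimal sketch of the packaging step, assuming the crash files live in /var/crash (the archive name is illustrative, not from the thread):

    cd /var/crash
    # -J selects xz compression; vmcore images compress well, which is
    # the tar'd + xz approach Mike mentions later in the thread
    tar -cJf vmcore0.txz info.0 vmcore.0
    # record a checksum so the recipient can verify the upload
    sha256 vmcore0.txz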
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 17:35:24 2014
From: "Steven Hartland"
Date: Mon, 2 Jun 2014 18:35:24 +0100
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk>
----- Original Message ----- From: "Mike Carlson"
> I have a crash dump; it's 3.2GB in size.
>
> How can I get this to the right people? I can put it up on S3, but I'd
> prefer to limit its access since it may contain sensitive information
> (we don't have local accounts aside from root, but better to be safe)

For starters, can you put the top contents of core.X.txt somewhere?

If there's sensitive info in there, just the top kgdb section with
the backtrace will be good initially.

Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 18:24:43 2014
From: Mike Carlson
Reply-To: mike@bayphoto.com
Date: Mon, 02 Jun 2014 11:24:42 -0700
To: Steven Hartland, freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <538CC16A.6060207@bayphoto.com>
In-Reply-To: <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk>

On 6/2/2014 10:35 AM, Steven Hartland wrote:
> ----- Original Message ----- From: "Mike Carlson"
>> I have a crash dump; it's 3.2GB in size.
>>
>> How can I get this to the right people? I can put it up on S3, but
>> I'd prefer to limit its access since it may contain sensitive
>> information (we don't have local accounts aside from root, but better
>> to be safe)
>
> For starters, can you put the top contents of core.X.txt somewhere?
>
> If there's sensitive info in there, just the top kgdb section with
> the backtrace will be good initially.
>
> Regards
> Steve

I don't have a core.0.txt, I only have:

~/p/z/dump> ls -al
total 347690
drwxr-xr-x  3 mikec  wheel             8 Jun  2 03:25 .
drwxr-xr-x  4 mikec  wheel             5 Jun  2 10:44 ..
drwxrwxr-x  2 mikec  operator          2 Jun  2 03:07 .snap
-rw-r--r--  1 mikec  wheel             2 Jun  2 03:24 bounds
-rw-------  1 mikec  wheel           446 Jun  2 03:24 info.0
lrwxr-xr-x  1 mikec  wheel             6 Jun  2 03:25 info.last -> info.0
-rw-------  1 mikec  wheel    3469885440 Jun  2 03:25 vmcore.0
lrwxr-xr-x  1 mikec  wheel             8 Jun  2 03:25 vmcore.last -> vmcore.0

But, here is the kgdb output (with backtrace):

~/p/z/dump> cat ../kgdb_backtrace.txt
<118>root@:/ # zfs set canmount=on zroot/data/working
<118>root@:/ # zfs mount zroot/data/working


Fatal trap 12: page fault while in kernel mode
cpuid = 14; apic id = 22
fault virtual address   = 0x4a0
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff8185a39f
stack pointer           = 0x28:0xfffffe1834608570
frame pointer           = 0x28:0xfffffe18346085b0
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 2 (txg_thread_enter)
trap number             = 12
panic: page fault
cpuid = 14
KDB: stack backtrace:
#0 0xffffffff808e7ee0 at kdb_backtrace+0x60
#1 0xffffffff808af9c5 at panic+0x155
#2 0xffffffff80c8e7b2 at trap_fatal+0x3a2
#3 0xffffffff80c8ea89 at trap_pfault+0x2c9
#4 0xffffffff80c8e216 at trap+0x5e6
#5 0xffffffff80c754b2 at calltrap+0x8
#6 0xffffffff8182eb5a at dsl_dataset_block_kill+0x3a
#7 0xffffffff8182b967 at dnode_sync+0x237
#8 0xffffffff81823fcb at dmu_objset_sync_dnodes+0x2b
#9 0xffffffff81823e4d at dmu_objset_sync+0x1ed
#10 0xffffffff8183829a at dsl_pool_sync+0xca
#11 0xffffffff81853a4e at spa_sync+0x52e
#12 0xffffffff8185c925 at txg_sync_thread+0x375
#13 0xffffffff80881a9a at fork_exit+0x9a
#14 0xffffffff80c759ee at fork_trampoline+0xe
Uptime: 26m15s
Dumping 3309 out of 98234 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

Reading symbols from /boot/kernel/zfs.ko.symbols...done.
Loaded symbols for /boot/kernel/zfs.ko.symbols
Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
Loaded symbols for /boot/kernel/opensolaris.ko.symbols
#0  doadump (textdump=<value optimized out>) at pcpu.h:219
219             __asm("movq %%gs:%1,%0" : "=r" (td)
(kgdb) backtrace
#0  doadump (textdump=<value optimized out>) at pcpu.h:219
#1  0xffffffff808af640 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:447
#2  0xffffffff808afa04 in panic (fmt=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:754
#3  0xffffffff80c8e7b2 in trap_fatal (frame=<value optimized out>, eva=<value optimized out>) at /usr/src/sys/amd64/amd64/trap.c:882
#4  0xffffffff80c8ea89 in trap_pfault (frame=0xfffffe18346084c0, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699
#5  0xffffffff80c8e216 in trap (frame=0xfffffe18346084c0) at /usr/src/sys/amd64/amd64/trap.c:463
#6  0xffffffff80c754b2 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:232
#7  0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, bp=0xfffffe001b8a1780) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635
#8  0xffffffff8182eb5a in dsl_dataset_block_kill (ds=0xfffff800410fec00, bp=0xfffffe001b8a1780, tx=0xfffff8004faa0600, async=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c:129
#9  0xffffffff8182b967 in dnode_sync (dn=0xfffff8004fe626c0, tx=0xfffff8004faa0600) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c:128
#10 0xffffffff81823fcb in dmu_objset_sync_dnodes (list=0xfffff80041956b10, newlist=<value optimized out>, tx=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:945
#11 0xffffffff81823e4d in dmu_objset_sync (os=0xfffff80041956800, pio=0xfffff800418c43b0, tx=0xfffff8004faa0600) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1062
#12 0xffffffff8183829a in dsl_pool_sync (dp=0xfffff8004183c000, txg=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:413
#13 0xffffffff81853a4e in spa_sync (spa=0xfffff80041835000, txg=3373534) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6410
#14 0xffffffff8185c925 in txg_sync_thread (arg=0xfffff8004183c000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:515
#15 0xffffffff80881a9a in fork_exit (callout=0xffffffff8185c5b0 <txg_sync_thread>, arg=0xfffff8004183c000, frame=0xfffffe1834608ac0) at /usr/src/sys/kern/kern_fork.c:995
#16 0xffffffff80c759ee in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
#17 0x0000000000000000 in ?? ()
Current language:  auto; currently minimal

If anyone wants to help out and check out the vmcore file, email me off the list and I'll provide an S3 URL of the tar'd + xz file.
Thanks,
Mike C
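The core.X.txt summary Steven asked for is normally written by crashinfo(8) when savecore(8) picks a dump up at boot; if it is missing, it can usually be regenerated after the fact. A sketch, assuming dump number 0 in /var/crash and kernel symbols matching the kernel that panicked:

    # crashinfo(8) writes a core.txt.N summary (backtrace, ps, vmstat, ...)
    # alongside the vmcore; run as root
    crashinfo -d /var/crash -n 0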
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 18:30:38 2014
From: Mike Carlson
Reply-To: mike@bayphoto.com
Date: Mon, 02 Jun 2014 11:30:36 -0700
To: freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <538CC2CC.3060306@bayphoto.com>
In-Reply-To: <538CC16A.6060207@bayphoto.com>
On 6/2/2014 11:24 AM, Mike Carlson wrote:
> [...]
Oh, also, here is the output of list *0xffffffff8185a39f:

(kgdb) list *0xffffffff8185a39f
0xffffffff8185a39f is in bp_get_dsize_sync (/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635).
warning: Source file is more recent than executable.
1630
1631            ASSERT(spa_config_held(spa, SCL_ALL, RW_READER) != 0);
1632
1633            if (asize != 0 && spa->spa_deflate) {
1634                    vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
1635                    dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
1636            }
1637
1638            return (dsize);
1639    }
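That listing lines up with the trap frame: the fault virtual address 0x4a0 is a small offset from a NULL pointer, which is consistent with vd being NULL at line 1635, since vdev_lookup_top() returns NULL when the DVA in the block pointer names a vdev id that does not match a current top-level vdev. A hedged sketch of that reading (not the stock code or a committed fix; the exact offset of vdev_deflate_ratio within vdev_t is assumed, not verified):

    /* If DVA_GET_VDEV(dva) is out of range, vdev_lookup_top() returns
     * NULL and the unchecked dereference below faults at the small
     * offset of vdev_deflate_ratio within vdev_t -- consistent with a
     * fault address of 0x4a0 rather than a wild pointer. */
    vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
    if (vd != NULL)
            dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;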
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 20:06:27 2014
From: "Steven Hartland"
Date: Mon, 2 Jun 2014 21:06:25 +0100
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Could you check what vd is at frame 7?

----- Original Message ----- From: "Mike Carlson"
[...]
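In kgdb terms, Steven's question amounts to selecting that frame and printing the pointer; a sketch (vd is a local and may be reported as optimized out, in which case the surrounding structures have to be walked instead):

    (kgdb) frame 7
    (kgdb) p vd          # 0x0 here would confirm a NULL top-level vdev lookup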
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 20:16:01 2014
From: Mike Carlson
Reply-To: mike@bayphoto.com
Date: Mon, 02 Jun 2014 13:15:59 -0700
To: Steven Hartland, freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Message-ID: <538CDB7F.2060408@bayphoto.com>

On 6/2/2014 1:06 PM, Steven Hartland wrote:
> Could you check what vd is at frame 7?
> ----- Original Message ----- From: "Mike Carlson"
> [...]
> drwxrwxr-x 2 mikec operator 2 Jun 2 03:07 .snap > -rw-r--r-- 1 mikec wheel 2 Jun 2 03:24 bounds > -rw------- 1 mikec wheel 446 Jun 2 03:24 info.0 > lrwxr-xr-x 1 mikec wheel 6 Jun 2 03:25 info.last -> > info.0 > -rw------- 1 mikec wheel 3469885440 Jun 2 03:25 vmcore.0 > lrwxr-xr-x 1 mikec wheel 8 Jun 2 03:25 vmcore.last > -> vmcore.0 > > But, here is the kgdb output (with backtrace): > > ~/p/z/dump> cat ../kgdb_backtrace.txt > <118>root@:/ # zfs set canmount=on zroot/data/working > <118>root@:/ # zfs mount zroot/data/working > > > Fatal trap 12: page fault while in kernel mode > cpuid = 14; apic id = 22 > fault virtual address = 0x4a0 > fault code = supervisor read data, page not present > instruction pointer = 0x20:0xffffffff8185a39f > stack pointer = 0x28:0xfffffe1834608570 > frame pointer = 0x28:0xfffffe18346085b0 > code segment = base 0x0, limit 0xfffff, type 0x1b > = DPL 0, pres 1, long 1, def32 0, gran 1 > processor eflags = interrupt enabled, resume, IOPL = 0 > current process = 2 (txg_thread_enter) > trap number = 12 > panic: page fault > cpuid = 14 > KDB: stack backtrace: > #0 0xffffffff808e7ee0 at kdb_backtrace+0x60 > #1 0xffffffff808af9c5 at panic+0x155 > #2 0xffffffff80c8e7b2 at trap_fatal+0x3a2 > #3 0xffffffff80c8ea89 at trap_pfault+0x2c9 > #4 0xffffffff80c8e216 at trap+0x5e6 > #5 0xffffffff80c754b2 at calltrap+0x8 > #6 0xffffffff8182eb5a at dsl_dataset_block_kill+0x3a > #7 0xffffffff8182b967 at dnode_sync+0x237 > #8 0xffffffff81823fcb at dmu_objset_sync_dnodes+0x2b > #9 0xffffffff81823e4d at dmu_objset_sync+0x1ed > #10 0xffffffff8183829a at dsl_pool_sync+0xca > #11 0xffffffff81853a4e at spa_sync+0x52e > #12 0xffffffff8185c925 at txg_sync_thread+0x375 > #13 0xffffffff80881a9a at fork_exit+0x9a > #14 0xffffffff80c759ee at fork_trampoline+0xe > Uptime: 26m15s > Dumping 3309 out of 98234 > MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91% > > Reading symbols from /boot/kernel/zfs.ko.symbols...done. > Loaded symbols for /boot/kernel/zfs.ko.symbols > Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
> Loaded symbols for /boot/kernel/opensolaris.ko.symbols > #0 doadump (textdump=) at pcpu.h:219 > 219 __asm("movq %%gs:%1,%0" : "=r" (td) > (kgdb) backtrace > #0 doadump (textdump=) at pcpu.h:219 > #1 0xffffffff808af640 in kern_reboot (howto=260) at > /usr/src/sys/kern/kern_shutdown.c:447 > #2 0xffffffff808afa04 in panic (fmt=) at > /usr/src/sys/kern/kern_shutdown.c:754 > #3 0xffffffff80c8e7b2 in trap_fatal (frame=, > eva=) at /usr/src/sys/amd64/amd64/trap.c:882 > #4 0xffffffff80c8ea89 in trap_pfault (frame=0xfffffe18346084c0, > usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699 > #5 0xffffffff80c8e216 in trap (frame=0xfffffe18346084c0) at > /usr/src/sys/amd64/amd64/trap.c:463 > #6 0xffffffff80c754b2 in calltrap () at > /usr/src/sys/amd64/amd64/exception.S:232 > #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, > bp=0xfffffe001b8a1780) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635 > #8 0xffffffff8182eb5a in dsl_dataset_block_kill > (ds=0xfffff800410fec00, bp=0xfffffe001b8a1780, > tx=0xfffff8004faa0600, async=0) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c:129 > #9 0xffffffff8182b967 in dnode_sync (dn=0xfffff8004fe626c0, > tx=0xfffff8004faa0600) at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c:128 > #10 0xffffffff81823fcb in dmu_objset_sync_dnodes > (list=0xfffff80041956b10, newlist=, tx= optimized out>) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:945 > #11 0xffffffff81823e4d in dmu_objset_sync (os=0xfffff80041956800, > pio=0xfffff800418c43b0, tx=0xfffff8004faa0600) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1062 > #12 0xffffffff8183829a in dsl_pool_sync (dp=0xfffff8004183c000, > txg=) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:413 > #13 0xffffffff81853a4e in spa_sync (spa=0xfffff80041835000, > txg=3373534) at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6410 > #14 0xffffffff8185c925 in txg_sync_thread (arg=0xfffff8004183c000) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:515 > #15 0xffffffff80881a9a in fork_exit (callout=0xffffffff8185c5b0 > , arg=0xfffff8004183c000, frame=0xfffffe1834608ac0) > at /usr/src/sys/kern/kern_fork.c:995 > #16 0xffffffff80c759ee in fork_trampoline () at > /usr/src/sys/amd64/amd64/exception.S:606 > #17 0x0000000000000000 in ?? () > Current language: auto; currently minimal > > > If anyone wants to help out and check out the vmcore file, email me > off the list and I'll provide a S3 url of the tar'd + xz file. > > > Output of "frame 7": (kgdb) frame 7 #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, bp=0xfffffe001b8a1780) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635 1635 dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio; Is that what you were looking for? I'm not familiar with this process, so I hope this does not become too painful in pulling the details out.
Mike C
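[Editor's note on the crash above: frame 7 and the fault address line up with an unchecked pointer in dva_get_dsize_sync(), which is evidently inlined into bp_get_dsize_sync() in this kernel. The sketch below is condensed from the spa_misc.c context quoted in the patch later in this thread; the DVA_GET_ASIZE initialization and the exact 0x4a0 member offset are assumptions, not verified against the 10.0 sources.]

    /*
     * Sketch of dva_get_dsize_sync() around spa_misc.c:1635.  If the
     * DVA's vdev id is corrupt, vdev_lookup_top() returns NULL, and the
     * unchecked vd->vdev_deflate_ratio load reads from NULL plus the
     * member offset -- which would explain the trap's fault virtual
     * address of 0x4a0.
     */
    uint64_t
    dva_get_dsize_sync_sketch(spa_t *spa, const dva_t *dva)
    {
            uint64_t asize = DVA_GET_ASIZE(dva);
            uint64_t dsize = asize;

            if (asize != 0 && spa->spa_deflate) {
                    vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
                    /* no NULL check here in 10.0-RELEASE */
                    dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
            }
            return (dsize);
    }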
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 20:19:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BC0D1108; Mon, 2 Jun 2014 20:19:50 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "wonkity.com", Issuer "wonkity.com" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 60ED52029; Mon, 2 Jun 2014 20:19:50 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.8/8.14.8) with ESMTP id s52KJkEw015934 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Mon, 2 Jun 2014 14:19:46 -0600 (MDT) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.8/8.14.8/Submit) with ESMTP id s52KJkFe015930; Mon, 2 Jun 2014 14:19:46 -0600 (MDT) (envelope-from wblock@wonkity.com) Date: Mon, 2 Jun 2014 14:19:45 -0600 (MDT) From: Warren Block To: Steven Hartland Subject: Re: fdisk(8) vs gpart(8), and gnop In-Reply-To: Message-ID: References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Mon, 02 Jun 2014 14:19:46 -0600 (MDT) Cc: freebsd-fs , FreeBSD Hackers , Matthew Ahrens , Nathan Whitehorn X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 20:19:50 -0000 On Mon, 2 Jun 2014, Steven Hartland wrote: > > ----- Original
Message ----- From: "Nathan Whitehorn" > >> >> I think we basically don't have any lying disks anymore. The ATA code does >> a very good job of this -- most tell the truth, but in an odd way that gets >> reported up the stack. ada(4) has a quirks table for the ones that do not. >> If this is the only concern, then we should just stop telling people to >> worry about this. >> >> My bigger concern is this pool upgrade one -- what if someone puts in a 4K >> disk in the future? > > Thats very much not the case I'm afraid, I try to add quirks for disk as > they are reported but there's always going to be quite a few which are > wrong until manufacturers stop making their FW lie :( > Both gpart and diskinfo show the correct values in the stripesize fields. At least, I've yet to see it be wrong. Maybe that is where ZFS should be getting the blocksize anyway. (Of course, stripesize might only be correct due to the quirks you mention, in which case... never mind.) From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 20:44:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 10CE1DDD for ; Mon, 2 Jun 2014 20:44:18 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id B5B7122FB for ; Mon, 2 Jun 2014 20:44:17 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 0366120E7088C; Mon, 2 Jun 2014 20:44:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.9 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1, HELO_NO_DOMAIN, RDNS_DYNAMIC, STOX_REPLY_TYPE, TVD_FINGER_02 autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id B918920E70886; Mon, 2 Jun 2014 20:44:12 +0000 (UTC) Message-ID: <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk> From: "Steven Hartland" To: , References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> <538C9CF3.6070208@bayphoto.com> <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk> <538CB3EA.9010807@bayphoto.com> <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk> <538CC16A.6060207@bayphoto.com> <538CDB7F.2060408@bayphoto.com> Subject: Re: ZFS Kernel Panic on 10.0-RELEASE Date: Mon, 2 Jun 2014 21:44:17 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="Windows-1252"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 20:44:18 -0000 ----- Original Message ----- From: "Mike Carlson" To: "Steven Hartland" ; Sent: Monday, June 02, 2014 9:15 PM Subject: Re: ZFS Kernel Panic on 10.0-RELEASE > On 6/2/2014 1:06 PM, Steven Hartland wrote: >> I don't have a core.0.txt, I only have: >> >> ~/p/z/dump> ls -al >> total 347690 >> drwxr-xr-x 3 mikec wheel 8 Jun 2 03:25 . >> drwxr-xr-x 4 mikec wheel 5 Jun 2 10:44 .. 
>> drwxrwxr-x 2 mikec operator 2 Jun 2 03:07 .snap >> -rw-r--r-- 1 mikec wheel 2 Jun 2 03:24 bounds >> -rw------- 1 mikec wheel 446 Jun 2 03:24 info.0 >> lrwxr-xr-x 1 mikec wheel 6 Jun 2 03:25 info.last -> >> info.0 >> -rw------- 1 mikec wheel 3469885440 Jun 2 03:25 vmcore.0 >> lrwxr-xr-x 1 mikec wheel 8 Jun 2 03:25 vmcore.last >> -> vmcore.0 >> >> But, here is the kgdb output (with backtrace): >> >> ~/p/z/dump> cat ../kgdb_backtrace.txt >> <118>root@:/ # zfs set canmount=on zroot/data/working >> <118>root@:/ # zfs mount zroot/data/working >> >> >> Fatal trap 12: page fault while in kernel mode >> cpuid = 14; apic id = 22 >> fault virtual address = 0x4a0 >> fault code = supervisor read data, page not present >> instruction pointer = 0x20:0xffffffff8185a39f >> stack pointer = 0x28:0xfffffe1834608570 >> frame pointer = 0x28:0xfffffe18346085b0 >> code segment = base 0x0, limit 0xfffff, type 0x1b >> = DPL 0, pres 1, long 1, def32 0, gran 1 >> processor eflags = interrupt enabled, resume, IOPL = 0 >> current process = 2 (txg_thread_enter) >> trap number = 12 >> panic: page fault >> cpuid = 14 >> KDB: stack backtrace: >> #0 0xffffffff808e7ee0 at kdb_backtrace+0x60 >> #1 0xffffffff808af9c5 at panic+0x155 >> #2 0xffffffff80c8e7b2 at trap_fatal+0x3a2 >> #3 0xffffffff80c8ea89 at trap_pfault+0x2c9 >> #4 0xffffffff80c8e216 at trap+0x5e6 >> #5 0xffffffff80c754b2 at calltrap+0x8 >> #6 0xffffffff8182eb5a at dsl_dataset_block_kill+0x3a >> #7 0xffffffff8182b967 at dnode_sync+0x237 >> #8 0xffffffff81823fcb at dmu_objset_sync_dnodes+0x2b >> #9 0xffffffff81823e4d at dmu_objset_sync+0x1ed >> #10 0xffffffff8183829a at dsl_pool_sync+0xca >> #11 0xffffffff81853a4e at spa_sync+0x52e >> #12 0xffffffff8185c925 at txg_sync_thread+0x375 >> #13 0xffffffff80881a9a at fork_exit+0x9a >> #14 0xffffffff80c759ee at fork_trampoline+0xe >> Uptime: 26m15s >> Dumping 3309 out of 98234 >> MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91% >> >> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >> Loaded symbols for /boot/kernel/zfs.ko.symbols >> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. 
>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >> #0 doadump (textdump=) at pcpu.h:219 >> 219 __asm("movq %%gs:%1,%0" : "=r" (td) >> (kgdb) backtrace >> #0 doadump (textdump=) at pcpu.h:219 >> #1 0xffffffff808af640 in kern_reboot (howto=260) at >> /usr/src/sys/kern/kern_shutdown.c:447 >> #2 0xffffffff808afa04 in panic (fmt=) at >> /usr/src/sys/kern/kern_shutdown.c:754 >> #3 0xffffffff80c8e7b2 in trap_fatal (frame=, >> eva=) at /usr/src/sys/amd64/amd64/trap.c:882 >> #4 0xffffffff80c8ea89 in trap_pfault (frame=0xfffffe18346084c0, >> usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699 >> #5 0xffffffff80c8e216 in trap (frame=0xfffffe18346084c0) at >> /usr/src/sys/amd64/amd64/trap.c:463 >> #6 0xffffffff80c754b2 in calltrap () at >> /usr/src/sys/amd64/amd64/exception.S:232 >> #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, >> bp=0xfffffe001b8a1780) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635 >> #8 0xffffffff8182eb5a in dsl_dataset_block_kill >> (ds=0xfffff800410fec00, bp=0xfffffe001b8a1780, >> tx=0xfffff8004faa0600, async=0) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c:129 >> #9 0xffffffff8182b967 in dnode_sync (dn=0xfffff8004fe626c0, >> tx=0xfffff8004faa0600) at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c:128 >> #10 0xffffffff81823fcb in dmu_objset_sync_dnodes >> (list=0xfffff80041956b10, newlist=, tx=> optimized out>) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:945 >> #11 0xffffffff81823e4d in dmu_objset_sync (os=0xfffff80041956800, >> pio=0xfffff800418c43b0, tx=0xfffff8004faa0600) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1062 >> #12 0xffffffff8183829a in dsl_pool_sync (dp=0xfffff8004183c000, >> txg=) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:413 >> #13 0xffffffff81853a4e in spa_sync (spa=0xfffff80041835000, >> txg=3373534) at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6410 >> #14 0xffffffff8185c925 in txg_sync_thread (arg=0xfffff8004183c000) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:515 >> #15 0xffffffff80881a9a in fork_exit (callout=0xffffffff8185c5b0 >> , arg=0xfffff8004183c000, frame=0xfffffe1834608ac0) >> at /usr/src/sys/kern/kern_fork.c:995 >> #16 0xffffffff80c759ee in fork_trampoline () at >> /usr/src/sys/amd64/amd64/exception.S:606 >> #17 0x0000000000000000 in ?? () >> Current language: auto; currently minimal >> >> >> If anyone wants to help out and check out the vmcore file, email me >> off the list and I'll provide a S3 url of the tar'd + xz file. >> >> > Output of "frame 7": > > (kgdb) frame 7 > #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, > bp=0xfffffe001b8a1780) > at > /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635 > 1635 dsize = (asize >> SPA_MINBLOCKSHIFT) * > vd->vdev_deflate_ratio; > > Is that what you were looking for? That's the line I gathered it was on, but no, I need to know what the value of vd is, so what you need to do is: print vd If that's valid then: print *vd Given the panic I'm kind of expecting garbage or null (0x00) > I'm not familiar with this process, so I hope this does not become too > painful in pulling the details out.
No problem, everyone has to learn some time ;-) Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 20:45:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 88D94E69; Mon, 2 Jun 2014 20:45:51 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 456C02312; Mon, 2 Jun 2014 20:45:51 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id B826E20E7088B; Mon, 2 Jun 2014 20:45:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 6717120E70886; Mon, 2 Jun 2014 20:45:46 +0000 (UTC) Message-ID: <1389B184A2434E55BCB9AD457273D1CF@multiplay.co.uk> From: "Steven Hartland" To: "Warren Block" References: <20140601004242.GA97224@bewilderbeast.blackhelicopters.org> <3D6974D83AE9495E890D9F3CA654FA94@multiplay.co.uk> <538B4CEF.2030801@freebsd.org> <1DB2D63312CE439A96B23EAADFA9436E@multiplay.co.uk> <538B4FD7.4090000@freebsd.org> <538C9207.9040806@freebsd.org> Subject: Re: fdisk(8) vs gpart(8), and gnop Date: Mon, 2 Jun 2014 21:45:51 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs , FreeBSD Hackers , Matthew Ahrens , Nathan Whitehorn X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 20:45:51 -0000 ----- Original Message ----- From: "Warren Block" >> ----- Original Message ----- From: "Nathan Whitehorn" >> >>> >>> I think we basically don't have any lying disks anymore. The ATA code does >>> a very good job of this -- most tell the truth, but in an odd way that gets >>> reported up the stack. ada(4) has a quirks table for the ones that do not. >>> If this is the only concern, then we should just stop telling people to >>> worry about this. >>> >>> My bigger concern is this pool upgrade one -- what if someone puts in a 4K >>> disk in the future? >> >> Thats very much not the case I'm afraid, I try to add quirks for disk as >> they are reported but there's always going to be quite a few which are >> wrong until manufacturers stop making their FW lie :( >> > > Both gpart and diskinfo show the correct values in the stripesize > fields. At least, I've yet to see it be wrong. Maybe that is where ZFS > should be getting the blocksize anyway. > > (Of course, stripesize might only be correct due to the quirks you > mention, in which case... never mind.) 
It is indeed because of the quirks we've manually entered I'm afraid :( Regards Steve From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 20:46:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 22157FD3 for ; Mon, 2 Jun 2014 20:46:45 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id F3E33233F for ; Mon, 2 Jun 2014 20:46:44 +0000 (UTC) Received: from [192.168.251.238] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id 1875623B505; Mon, 2 Jun 2014 13:46:44 -0700 (PDT) Message-ID: <538CE2B3.8090008@bayphoto.com> Date: Mon, 02 Jun 2014 13:46:43 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , freebsd-fs@freebsd.org Subject: Re: ZFS Kernel Panic on 10.0-RELEASE References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> <538C9CF3.6070208@bayphoto.com> <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk> <538CB3EA.9010807@bayphoto.com> <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk> <538CC16A.6060207@bayphoto.com> <538CDB7F.2060408@bayphoto.com> <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk> In-Reply-To: <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms050206090208010102000709" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 20:46:45 -0000 This is a cryptographically signed message in MIME format. --------------ms050206090208010102000709 Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: quoted-printable On 6/2/2014 1:44 PM, Steven Hartland wrote: > > ----- Original Message ----- From: "Mike Carlson" > To: "Steven Hartland" ; > Sent: Monday, June 02, 2014 9:15 PM > Subject: Re: ZFS Kernel Panic on 10.0-RELEASE > > >> On 6/2/2014 1:06 PM, Steven Hartland wrote: > >>> I don't have a core.0.txt, I only have: >>> >>> ~/p/z/dump> ls -al >>> total 347690 >>> drwxr-xr-x 3 mikec wheel 8 Jun 2 03:25 . >>> drwxr-xr-x 4 mikec wheel 5 Jun 2 10:44 .. 
>>> drwxrwxr-x 2 mikec operator 2 Jun 2 03:07 .snap >>> -rw-r--r-- 1 mikec wheel 2 Jun 2 03:24 bounds >>> -rw------- 1 mikec wheel 446 Jun 2 03:24 info.0 >>> lrwxr-xr-x 1 mikec wheel 6 Jun 2 03:25 info.last -> >>> info.0 >>> -rw------- 1 mikec wheel 3469885440 Jun 2 03:25 vmcore.0 >>> lrwxr-xr-x 1 mikec wheel 8 Jun 2 03:25 vmcore.last >>> -> vmcore.0 >>> >>> But, here is the kgdb output (with backtrace): >>> >>> ~/p/z/dump> cat ../kgdb_backtrace.txt >>> <118>root@:/ # zfs set canmount=on zroot/data/working >>> <118>root@:/ # zfs mount zroot/data/working >>> >>> >>> Fatal trap 12: page fault while in kernel mode >>> cpuid = 14; apic id = 22 >>> fault virtual address = 0x4a0 >>> fault code = supervisor read data, page not present >>> instruction pointer = 0x20:0xffffffff8185a39f >>> stack pointer = 0x28:0xfffffe1834608570 >>> frame pointer = 0x28:0xfffffe18346085b0 >>> code segment = base 0x0, limit 0xfffff, type 0x1b >>> = DPL 0, pres 1, long 1, def32 0, gran 1 >>> processor eflags = interrupt enabled, resume, IOPL = 0 >>> current process = 2 (txg_thread_enter) >>> trap number = 12 >>> panic: page fault >>> cpuid = 14 >>> KDB: stack backtrace: >>> #0 0xffffffff808e7ee0 at kdb_backtrace+0x60 >>> #1 0xffffffff808af9c5 at panic+0x155 >>> #2 0xffffffff80c8e7b2 at trap_fatal+0x3a2 >>> #3 0xffffffff80c8ea89 at trap_pfault+0x2c9 >>> #4 0xffffffff80c8e216 at trap+0x5e6 >>> #5 0xffffffff80c754b2 at calltrap+0x8 >>> #6 0xffffffff8182eb5a at dsl_dataset_block_kill+0x3a >>> #7 0xffffffff8182b967 at dnode_sync+0x237 >>> #8 0xffffffff81823fcb at dmu_objset_sync_dnodes+0x2b >>> #9 0xffffffff81823e4d at dmu_objset_sync+0x1ed >>> #10 0xffffffff8183829a at dsl_pool_sync+0xca >>> #11 0xffffffff81853a4e at spa_sync+0x52e >>> #12 0xffffffff8185c925 at txg_sync_thread+0x375 >>> #13 0xffffffff80881a9a at fork_exit+0x9a >>> #14 0xffffffff80c759ee at fork_trampoline+0xe >>> Uptime: 26m15s >>> Dumping 3309 out of 98234 >>> MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91% >>> >>> Reading symbols from /boot/kernel/zfs.ko.symbols...done. >>> Loaded symbols for /boot/kernel/zfs.ko.symbols >>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols >>> #0 doadump (textdump=) at pcpu.h:219 >>> 219 __asm("movq %%gs:%1,%0" : "=r" (td) >>> (kgdb) backtrace >>> #0 doadump (textdump=) at pcpu.h:219 >>> #1 0xffffffff808af640 in kern_reboot (howto=260) at >>> /usr/src/sys/kern/kern_shutdown.c:447 >>> #2 0xffffffff808afa04 in panic (fmt=) at >>> /usr/src/sys/kern/kern_shutdown.c:754 >>> #3 0xffffffff80c8e7b2 in trap_fatal (frame=, >>> eva=) at /usr/src/sys/amd64/amd64/trap.c:882 >>> #4 0xffffffff80c8ea89 in trap_pfault (frame=0xfffffe18346084c0, >>> usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699 >>> #5 0xffffffff80c8e216 in trap (frame=0xfffffe18346084c0) at >>> /usr/src/sys/amd64/amd64/trap.c:463 >>> #6 0xffffffff80c754b2 in calltrap () at >>> /usr/src/sys/amd64/amd64/exception.S:232 >>> #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, >>> bp=0xfffffe001b8a1780) >>> at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635 >>> >>> #8 0xffffffff8182eb5a in dsl_dataset_block_kill >>> (ds=0xfffff800410fec00, bp=0xfffffe001b8a1780, >>> tx=0xfffff8004faa0600, async=0) >>> at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c:129 >>> >>> #9 0xffffffff8182b967 in dnode_sync (dn=0xfffff8004fe626c0, >>> tx=0xfffff8004faa0600) at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c:128 >>> >>> #10 0xffffffff81823fcb in dmu_objset_sync_dnodes >>> (list=0xfffff80041956b10, newlist=, tx= >> optimized out>) >>> at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:945 >>> >>> #11 0xffffffff81823e4d in dmu_objset_sync (os=0xfffff80041956800, >>> pio=0xfffff800418c43b0, tx=0xfffff8004faa0600) >>> at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1062 >>> >>> #12 0xffffffff8183829a in dsl_pool_sync (dp=0xfffff8004183c000, >>> txg=) >>> at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:413 >>> >>> #13 0xffffffff81853a4e in spa_sync (spa=0xfffff80041835000, >>> txg=3373534) at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6410 >>> >>> #14 0xffffffff8185c925 in txg_sync_thread (arg=0xfffff8004183c000) >>> at >>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:515 >>> >>> #15 0xffffffff80881a9a in fork_exit (callout=0xffffffff8185c5b0 >>> , arg=0xfffff8004183c000, frame=0xfffffe1834608ac0) >>> at /usr/src/sys/kern/kern_fork.c:995 >>> #16 0xffffffff80c759ee in fork_trampoline () at >>> /usr/src/sys/amd64/amd64/exception.S:606 >>> #17 0x0000000000000000 in ?? () >>> Current language: auto; currently minimal >>> >>> >>> If anyone wants to help out and check out the vmcore file, email me >>> off the list and I'll provide a S3 url of the tar'd + xz file. >>> >>> >> Output of "frame 7": >> >> (kgdb) frame 7 >> #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, >> bp=0xfffffe001b8a1780) >> at >> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635 >> 1635 dsize = (asize >> SPA_MINBLOCKSHIFT) * >> vd->vdev_deflate_ratio; >> >> Is that what you were looking for?
> > That's the line I gathered it was on, but no, I need to know what the value > of vd is, so what you need to do is: > print vd > > If that's valid then: > print *vd > It reports: (kgdb) print *vd No symbol "vd" in current context. Should I rebuild the kernel with additional options? > Given the panic I'm kind of expecting garbage or null (0x00) > >> I'm not familiar with this process, so I hope this does not become too >> painful in pulling the details out. > No problem, everyone has to learn some time ;-) > Thanks Steve :) > Regards > Steve
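[Editor's aside, not from the thread: "No symbol 'vd' in current context" means the optimizer kept vd in a register and emitted no debug location for it. A generic, standalone illustration of one workaround with hypothetical names: mirroring a value into a volatile object forces a store the compiler cannot elide, so the debugger can always inspect it.]

    #include <stdio.h>

    /*
     * A volatile file-scope mirror: stores to a volatile object are side
     * effects, so the compiler must perform them even at -O2, and a
     * debugger can print debug_copy even when the local p is folded away.
     */
    static int * volatile debug_copy;

    static int
    lookup_and_use(int *table, int idx)
    {
            int *p = &table[idx];   /* may not survive optimization... */
            debug_copy = p;         /* ...but this store always happens */
            return (*p);
    }

    int
    main(void)
    {
            int table[4] = { 1, 2, 3, 4 };
            printf("%d\n", lookup_and_use(table, 2));
            return (0);
    }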
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 21:15:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 698599AA for ; Mon, 2 Jun 2014 21:15:12 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 0050325C8 for ; Mon, 2 Jun 2014 21:15:11 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 9FC5420E7088C; Mon, 2 Jun 2014 21:15:09 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.0 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 621F120E70886; Mon, 2 Jun 2014 21:15:05 +0000 (UTC) Message-ID: <85184EB23AA84607A360E601D03E1741@multiplay.co.uk> From: "Steven Hartland" To: , References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> <538C9CF3.6070208@bayphoto.com> <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk> <538CB3EA.9010807@bayphoto.com> <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk> <538CC16A.6060207@bayphoto.com>
<538CDB7F.2060408@bayphoto.com> <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk> <538CE2B3.8090008@bayphoto.com> Subject: Re: ZFS Kernel Panic on 10.0-RELEASE Date: Mon, 2 Jun 2014 22:15:10 +0100 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_0602_01CF7EB0.1E7212D0" X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 21:15:12 -0000 This is a multi-part message in MIME format. ------=_NextPart_000_0602_01CF7EB0.1E7212D0 Content-Type: text/plain; format=flowed; charset="Windows-1252"; reply-type=response Content-Transfer-Encoding: 7bit ----- Original Message ----- From: "Mike Carlson" >> That's the line I gathered it was on, but no, I need to know what the value >> of vd is, so what you need to do is: >> print vd >> >> If that's valid then: >> print *vd >> >It reports: > >(kgdb) print *vd > No symbol "vd" in current context. Damn optimiser :( > Should I rebuild the kernel with additional options? Likely won't help, as a kernel with zero optimisations tends to fail to build in my experience :( Can you try applying the attached patch to your src e.g. cd /usr/src patch < zfs-dsize-dva-check.patch Then rebuild, install the kernel and reproduce the issue again. Hopefully it will provide some more information on the cause, but I suspect you might be seeing the effects of some corruption. Regards Steve ------=_NextPart_000_0602_01CF7EB0.1E7212D0 Content-Type: application/octet-stream; name="zfs-dsize-dva-check.patch" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="zfs-dsize-dva-check.patch"
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c	(revision 266009)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c	(working copy)
@@ -1631,7 +1631,14 @@ dva_get_dsize_sync(spa_t *spa, const dva_t *dva)
 	ASSERT(spa_config_held(spa, SCL_ALL, RW_READER) != 0);
 
 	if (asize != 0 && spa->spa_deflate) {
-		vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
+		uint64_t vdev = DVA_GET_VDEV(dva);
+		vdev_t *vd = vdev_lookup_top(spa, vdev);
+		if (vd == NULL) {
+			cmn_err(CE_WARN, "dva_get_dsize_sync(): bad DVA %llu:%llu",
+			    (u_longlong_t)vdev, (u_longlong_t)asize);
+			ASSERT(0);
+			return (dsize);
+		}
 		dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
 	}
 
------=_NextPart_000_0602_01CF7EB0.1E7212D0-- From owner-freebsd-fs@FreeBSD.ORG Mon Jun 2 22:57:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5941D4F6 for ; Mon, 2 Jun 2014 22:57:58 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id 33F492DDC for ; Mon, 2 Jun 2014 22:57:57 +0000 (UTC) Received: from [192.168.251.238] (unknown
[207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id A569023B57D; Mon, 2 Jun 2014 15:57:56 -0700 (PDT) Message-ID: <538D0174.6000906@bayphoto.com> Date: Mon, 02 Jun 2014 15:57:56 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , freebsd-fs@freebsd.org Subject: Re: ZFS Kernel Panic on 10.0-RELEASE References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> <538C9CF3.6070208@bayphoto.com> <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk> <538CB3EA.9010807@bayphoto.com> <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk> <538CC16A.6060207@bayphoto.com> <538CDB7F.2060408@bayphoto.com> <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk> <538CE2B3.8090008@bayphoto.com> <85184EB23AA84607A360E601D03E1741@multiplay.co.uk> In-Reply-To: <85184EB23AA84607A360E601D03E1741@multiplay.co.uk> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms010902090704070500020304" X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 02 Jun 2014 22:57:58 -0000 This is a cryptographically signed message in MIME format. --------------ms010902090704070500020304 Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: quoted-printable On 6/2/2014 2:15 PM, Steven Hartland wrote: > ----- Original Message ----- From: "Mike Carlson" > >>> That's the line I gathered it was on, but no, I need to know what the >>> value >>> of vd is, so what you need to do is: >>> print vd >>> >>> If that's valid then: >>> print *vd >>> >> It reports: >> >> (kgdb) print *vd >> No symbol "vd" in current context. > > Damn optimiser :( > >> Should I rebuild the kernel with additional options? > > Likely won't help, as a kernel with zero optimisations tends to fail > to build in my experience :( > > Can you try applying the attached patch to your src e.g. > cd /usr/src > patch < zfs-dsize-dva-check.patch > > Then rebuild, install the kernel and reproduce the issue again. > > Hopefully it will provide some more information on the cause, but > I suspect you might be seeing the effects of some corruption. > > Regards > Steve Well, after building the kernel with your patch, installing it and booting off of it, the system does not panic. It reports this when I mount the filesystem: Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648 Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648 Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648 Here are the results; I can now mount the file system!
root@working-1:~ # zfs set canmount=on zroot/data/working
root@working-1:~ # zfs mount zroot/data/working
root@working-1:~ # df
Filesystem                 1K-blocks       Used      Avail Capacity  Mounted on
zroot                     2677363378    1207060 2676156318     0%    /
devfs                              1          1          0   100%    /dev
/dev/mfid10p1              253911544    2827824  230770800     1%    /dump
zroot/home                2676156506        188 2676156318     0%    /home
zroot/data                2676156389         71 2676156318     0%    /mnt/data
zroot/usr/ports/distfiles 2676246609      90291 2676156318     0%    /mnt/usr/ports/distfiles
zroot/usr/ports/packages  2676158702       2384 2676156318     0%    /mnt/usr/ports/packages
zroot/tmp                 2676156812        493 2676156318     0%    /tmp
zroot/usr                 2679746045    3589727 2676156318     0%    /usr
zroot/usr/ports           2676986896     830578 2676156318     0%    /usr/ports
zroot/usr/src             2676643553     487234 2676156318     0%    /usr/src
zroot/var                 2676650671     494353 2676156318     0%    /var
zroot/var/crash           2676156388         69 2676156318     0%    /var/crash
zroot/var/db              2677521200    1364882 2676156318     0%    /var/db
zroot/var/db/pkg          2676198058      41740 2676156318     0%    /var/db/pkg
zroot/var/empty           2676156387         68 2676156318     0%    /var/empty
zroot/var/log             2676168522      12203 2676156318     0%    /var/log
zroot/var/mail            2676157043        725 2676156318     0%    /var/mail
zroot/var/run             2676156508        190 2676156318     0%    /var/run
zroot/var/tmp             2676156389         71 2676156318     0%    /var/tmp
zroot/data/working        7664687468 4988531149 2676156318    65%    /mnt/data/working
root@working-1:~ # ls /mnt/data/working/
DONE_ORDERS DP2_CMD NEW_MULTI_TESTING PROCESS RECYCLER XML_NOTIFICATIONS XML_REPORTS
Mike C
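[Editor's note on reading those warnings: the two numbers are the vdev id and asize that the patch's cmn_err() prints after decoding the DVA. Below is a standalone sketch of that decoding, assuming the usual ZFS DVA layout (top-level vdev id in the upper 32 bits of the first 64-bit word, allocated size in 512-byte sectors in the low 24 bits); treat the offsets as assumptions if your sources differ. A vdev id like 131241 is far beyond any real pool's top-level vdev count, which is why vdev_lookup_top() returned NULL.]

    #include <stdio.h>
    #include <stdint.h>

    #define SPA_MINBLOCKSHIFT 9     /* 512-byte sectors */

    /*
     * dva_word[0] packs (among other bits) the top-level vdev id in
     * bits 32..63 and the allocated size, in sectors, in bits 0..23.
     * A corrupt word can decode to a vdev id far beyond the pool's
     * vdev count, matching "bad DVA 131241:2147483648" above.
     */
    static uint64_t dva_get_vdev(uint64_t w0) { return (w0 >> 32); }
    static uint64_t dva_get_asize(uint64_t w0)
    {
            return ((w0 & 0xffffff) << SPA_MINBLOCKSHIFT);
    }

    int
    main(void)
    {
            /* hypothetical corrupt word reproducing the reported values */
            uint64_t w0 = ((uint64_t)131241 << 32) |
                (2147483648ULL >> SPA_MINBLOCKSHIFT);
            printf("bad DVA %llu:%llu\n",
                (unsigned long long)dva_get_vdev(w0),
                (unsigned long long)dva_get_asize(w0));
            return (0);
    }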
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 00:29:09 2014
From: "Steven Hartland"
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Date: Tue, 3 Jun 2014 01:29:08 +0100

----- Original Message ----- From: "Mike Carlson"
Sent: Monday, June 02, 2014 11:57 PM
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE

> On 6/2/2014 2:15 PM, Steven Hartland wrote:
>> [...]
>
> Well, after building the kernel with your patch, installing it and
> booting off of it, the system does not panic.
>
> It reports this when I mount the filesystem:
>
> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>
> Here are the results, I can now mount the file system!
>
> root@working-1:~ # zfs set canmount=on zroot/data/working
> root@working-1:~ # zfs mount zroot/data/working
> root@working-1:~ # df
> [...]
> root@working-1:~ # ls /mnt/data/working/
> DONE_ORDERS        DP2_CMD            NEW_MULTI_TESTING  PROCESS
> RECYCLER           XML_NOTIFICATIONS  XML_REPORTS

That does indeed seem to indicate some on-disk corruption.

There are a number of cases in the code which have a similar check, but
I'm afraid I don't know the implications of the corruption you're
seeing; others may.

The attached updated patch will enforce the safe panic in this case
unless the sysctl vfs.zfs.recover is set to 1 (which can also now be
done on the fly).

I'd recommend backing up the data off the pool and restoring it
elsewhere.
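(To make the "on the fly" part concrete - a sketch, assuming the updated patch is applied; vfs.zfs.recover is also a boot-time tunable:)

    sysctl vfs.zfs.recover=1                        # runtime, writable once the flag is CTLFLAG_RWTUN
    echo 'vfs.zfs.recover=1' >> /boot/loader.conf   # or set it at boot via the loader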
It would be interesting to see the output of the following command
on your pool:

    zdb -uuumdC

Regards
Steve

[attachment: zfs-dsize-dva-check.patch]

Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c (revision 266009)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c (working copy)
@@ -252,7 +252,7 @@ int zfs_flags = 0;
 int zfs_recover = 0;
 SYSCTL_DECL(_vfs_zfs);
 TUNABLE_INT("vfs.zfs.recover", &zfs_recover);
-SYSCTL_INT(_vfs_zfs, OID_AUTO, recover, CTLFLAG_RDTUN, &zfs_recover, 0,
+SYSCTL_INT(_vfs_zfs, OID_AUTO, recover, CTLFLAG_RWTUN, &zfs_recover, 0,
     "Try to recover from otherwise-fatal errors.");

 extern int zfs_txg_synctime_ms;
@@ -1631,7 +1631,13 @@ dva_get_dsize_sync(spa_t *spa, const dva_t *dva)
 	ASSERT(spa_config_held(spa, SCL_ALL, RW_READER) != 0);

 	if (asize != 0 && spa->spa_deflate) {
-		vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
+		uint64_t vdev = DVA_GET_VDEV(dva);
+		vdev_t *vd = vdev_lookup_top(spa, vdev);
+		if (vd == NULL) {
+			zfs_panic_recover(
+			    "dva_get_dsize_sync(): bad DVA %llu:%llu",
+			    (u_longlong_t)vdev, (u_longlong_t)asize);
+		}
 		dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
 	}

From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 00:37:37 2014
Message-ID: <538D18CB.5020906@bayphoto.com>
Date: Mon, 02 Jun 2014 17:37:31 -0700
From: Mike Carlson
Reply-To: mike@bayphoto.com
To: Steven Hartland, freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
boundary="------------ms020807010704030109080200" X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 03 Jun 2014 00:37:38 -0000 This is a cryptographically signed message in MIME format. --------------ms020807010704030109080200 Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: quoted-printable On 6/2/2014 5:29 PM, Steven Hartland wrote: > > ----- Original Message ----- From: "Mike Carlson" > To: "Steven Hartland" ; > Sent: Monday, June 02, 2014 11:57 PM > Subject: Re: ZFS Kernel Panic on 10.0-RELEASE > > >> On 6/2/2014 2:15 PM, Steven Hartland wrote: >>> ----- Original Message ----- From: "Mike Carlson" = >>> >>>>> Thats the line I gathered it was on but no I need to know what the = >>>>> value >>>>> of vd is, so what you need to do is: >>>>> print vd >>>>> >>>>> If thats valid then: >>>>> print *vd >>>>> >>>> It reports: >>>> >>>> (kgdb) print *vd >>>> No symbol "vd" in current context. >>> >>> Dam optimiser :( >>> >>>> Should I rebuild the kernel with additional options? >>> >>> Likely wont help as kernel with zero optimisations tends to fail >>> to build in my experience :( >>> >>> Can you try applying the attached patch to your src e.g. >>> cd /usr/src >>> patch < zfs-dsize-dva-check.patch >>> >>> The rebuild, install the kernel and then reproduce the issue again. >>> >>> Hopefully it will provide some more information on the cause, but >>> I suspect you might be seeing the effect os have some corruption. >> >> Well, after building the kernel with your patch, installing it and=20 >> booting off of it, the system does not panic. >> >> It reports this when I mount the filesystem: >> >> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648 >> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648 >> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648 >> >> Here is the results, I can now mount the file system! 
>>
>> root@working-1:~ # zfs set canmount=on zroot/data/working
>> root@working-1:~ # zfs mount zroot/data/working
>> root@working-1:~ # df
>> [...]
>> root@working-1:~ # ls /mnt/data/working/
>> DONE_ORDERS        DP2_CMD            NEW_MULTI_TESTING  PROCESS
>> RECYCLER           XML_NOTIFICATIONS  XML_REPORTS
>
> That does indeed seem to indicate some on-disk corruption.
>
> There are a number of cases in the code which have a similar check, but
> I'm afraid I don't know the implications of the corruption you're
> seeing; others may.
>
> The attached updated patch will enforce the safe panic in this case
> unless the sysctl vfs.zfs.recover is set to 1 (which can also now be
> done on the fly).
>
> I'd recommend backing up the data off the pool and restoring it
> elsewhere.
>
> It would be interesting to see the output of the following command
> on your pool:
> zdb -uuumdC
>
> Regards
> Steve

I'm applying that patch and rebuilding the kernel again.

Here is the output from zdb -uuumdC:

zroot:
    version: 28
    name: 'zroot'
    state: 0
    txg: 13
    pool_guid: 9132288035431788388
    hostname: 'amnesia.discdrive.bayphoto.com'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9132288035431788388
        children[0]:
            type: 'raidz'
            id: 0
            guid: 15520162542638044402
            nparity: 2
            metaslab_array: 31
            metaslab_shift: 36
            ashift: 9
            asize: 9894744555520
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 4289437176706222104
                path: '/dev/gpt/disk0'
                phys_path: '/dev/gpt/disk0'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 5369387862706621015
                path: '/dev/gpt/disk1'
                phys_path: '/dev/gpt/disk1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 456749962069636782
                path: '/dev/gpt/disk2'
                phys_path: '/dev/gpt/disk2'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 3809413300177228462
                path: '/dev/gpt/disk3'
                phys_path: '/dev/gpt/disk3'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 4978694931676882497
                path: '/dev/gpt/disk4'
                phys_path: '/dev/gpt/disk4'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 17831739822150458220
                path: '/dev/gpt/disk5'
                phys_path: '/dev/gpt/disk5'
                whole_disk: 1
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 1286918567594965543
                path: '/dev/gpt/disk6'
                phys_path: '/dev/gpt/disk6'
                whole_disk: 1
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 7958718879588658810
                path: '/dev/gpt/disk7'
                phys_path: '/dev/gpt/disk7'
                whole_disk: 1
                create_txg: 4
            children[8]:
                type: 'disk'
                id: 8
                guid: 18392960683862755998
                path: '/dev/gpt/disk8'
                phys_path: '/dev/gpt/disk8'
                whole_disk: 1
                create_txg: 4
            children[9]:
                type: 'disk'
                id: 9
                guid: 13046629036569375198
                path: '/dev/gpt/disk9'
                phys_path: '/dev/gpt/disk9'
                whole_disk: 1
                create_txg: 4
            children[10]:
                type: 'disk'
                id: 10
                guid: 10604061156531251346
                path: '/dev/gpt/disk11'
                phys_path: '/dev/gpt/disk11'
                whole_disk: 1
                create_txg: 4

I find it strange that it says version 28 when it was upgraded to
version 5000.
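(An editor's guess, given the follow-ups below: zdb without a pool name dumps the configuration cached in the cachefile, which here looks stale - note txg 13 and the other hostname. The live pool's version can be checked directly; on a pool upgraded to feature flags, the version property should read "-" rather than 5000:)

    zpool get version zroot   # "-" is expected for a v5000/feature-flags pool
    zpool upgrade -v          # lists the versions and features this system supports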
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 00:45:29 2014
Message-ID: <9F4F4969D31A444EB646E949B712791C@multiplay.co.uk>
From: "Steven Hartland"
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Date: Tue, 3 Jun 2014 01:45:29 +0100

----- Original Message ----- From: "Mike Carlson"

>> It would be interesting to see the output of the following command
>> on your pool:
>> zdb -uuumdC
>
> I'm applying that patch and rebuilding the kernel again.
>
> Here is the output from zdb -uuumdC:

Looks like you missed the pool name, which unfortunately changes the
output. What does the following show:

    zdb -uuumdC zroot

Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 00:54:47 2014
Message-ID: <538D1CD5.5070902@bayphoto.com>
Date: Mon, 02 Jun 2014 17:54:45 -0700
From: Mike Carlson
Reply-To: mike@bayphoto.com
To: freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
In-Reply-To: <538D18CB.5020906@bayphoto.com>

On 6/2/2014 5:37 PM, Mike Carlson wrote:
> On 6/2/2014 5:29 PM, Steven Hartland wrote:
>> [...]
>>
>> It would be interesting to see the output of the following command
>> on your pool:
>> zdb -uuumdC
>>
>> Regards
>> Steve

Scratch that last one; the cachefile had to be reset on the pool to
/boot/zfs/zpool.cache.

So I'm running it now, and it's taking so long to traverse all blocks
that it is telling me it's going to take around 5400 HOURS.

I guess I'll report back in 90 days?
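(For reference, "resetting the cachefile" is presumably the usual zpool property update - a sketch, using the pool name from this thread:)

    zpool set cachefile=/boot/zfs/zpool.cache zroot
    zdb -uuumdC zroot    # zdb then works from the refreshed configuration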
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 04:04:18 2014
Date: Tue, 3 Jun 2014 11:04:13 +0700
From: Maxxie Root
Message-ID: <299712548.20140603110413@gmail.com>
To: freebsd-fs@freebsd.org
Subject: FUSE curlftpfs write problem after upgrading to 10.0-RELEASE

Hello!

I've got a problem with FUSE (particularly fusefs-curlftpfs) - I cannot
write files; it fails with "Operation not supported".
On 9.2 it worked well.

Here is brief information:

# uname -a
FreeBSD testbsd 10.0-RELEASE-p3 FreeBSD 10.0-RELEASE-p3 #0: Tue May 13 18:26:10 UTC 2014 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC i386

# kldload fuse
fuse-freebsd: version 0.4.4, FUSE ABI 7.8

# kldstat
Id Refs Address    Size    Name
 1    8 0xc0400000 1276c0c kernel
 2    1 0xc5e70000 12000   ipfw.ko
 3    1 0xc5f1c000 2000    blank_saver.ko
 4    1 0xc6029000 d000    fuse.ko

# curlftpfs -V
curlftpfs 0.9.2 libcurl/7.36.0 fuse/2.9

# curlftpfs -s ftp://user:password@ftpserver /mnt/fuseftp

# mount | grep fuse
/dev/fuse on /mnt/fuseftp (fusefs, local, synchronous)

# ls -la /mnt/fuseftp
total 12
drwxrwxrwt 2 root wheel 512 Jun  3 10:12 .
drwxrwxrwt 3 root wheel 512 Jan 17 08:51 ..
-rw-r--r-- 1 root wheel 990 Oct 25  2013 readtest.txt

# file /mnt/fuseftp/readtest.txt
/mnt/fuseftp/readtest.txt: ASCII text, with CRLF line terminators

# dd if=/mnt/fuseftp/readtest.txt of=/dev/null
1+1 records in
1+1 records out
990 bytes transferred in 0.000154 secs (6427803 bytes/sec)

# touch /mnt/fuseftp/writetest.txt
touch: /mnt/fuseftp/writetest.txt: Operation not supported

# dd if=/dev/null of=/mnt/fuseftp/writetest.txt count=1024
dd: /mnt/fuseftp/writetest.txt: Operation not supported

Any suggestions?

Thanks

//Max
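(When chasing this kind of failure it can help to run the filesystem in the foreground with debugging enabled. A sketch - the option names below are from memory of curlftpfs 0.9.2 and FUSE, so verify them against curlftpfs -h before relying on them:)

    curlftpfs -f -v -o ftpfs_debug=2 ftp://user:password@ftpserver /mnt/fuseftp
    # -f stays in the foreground, -v makes libcurl verbose,
    # ftpfs_debug raises curlftpfs's own debug output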
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 08:26:43 2014
Date: Tue, 03 Jun 2014 10:26:37 +0200
From: "Ronald Klop"
To: freebsd-fs@freebsd.org
Subject: Re: FUSE curlftpfs write problem after upgrading to 10.0-RELEASE
In-Reply-To: <299712548.20140603110413@gmail.com>

On Tue, 03 Jun 2014 06:04:13 +0200, Maxxie Root wrote:

> Hello!
>
> I've got a problem with FUSE (particularly fusefs-curlftpfs) - I cannot
> write files; it fails with "Operation not supported".
> On 9.2 it worked well.

Are you using the fuse module from ports or from freebsd itself? In 10.0
the fuse kernel module is included.

Ronald.

> [...]
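(To answer Ronald's question on a 10.0 box, something like the following shows which fuse module is in play; the paths are the usual defaults rather than guarantees:)

    kldstat | grep fuse            # is fuse.ko loaded at all?
    ls -l /boot/kernel/fuse.ko     # the base-system module lives here
    ls -l /boot/modules/fuse.ko    # port-built kernel modules usually land here
    pkg info -x fusefs             # any fusefs-* packages installed?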
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 08:35:29 2014
Date: Tue, 3 Jun 2014 15:35:23 +0700
From: Maxxie Root
Message-ID: <1013096370.20140603153523@gmail.com>
To: freebsd-fs@freebsd.org
Subject: Re: FUSE curlftpfs write problem after upgrading to 10.0-RELEASE

> Are you using the fuse module from ports or from freebsd itself? In 10.0
> the fuse kernel module is included.
> Ronald.
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 09:21:19 2014
Date: Tue, 3 Jun 2014 11:21:16 +0200
From: Olav Gjerde
Cc: FreeBSD Filesystems
Subject: Re: RFC and testing: NFSv4.1 server going into head

This is great stuff! I'm wondering, though: is there any reason that the
mount option for FreeBSD has to be different from Linux's?

On Mon, Jun 2, 2014 at 3:23 PM, Rick Macklem wrote:
> Hi,
>
> I think that the NFSv4.1 server code in projects/nfsv4.1-server is
> about ready to be merged into head.
>
> As such, if anyone has the resources to do so in the next 2 weeks,
> please take a look at the code and/or test it.
>
> Also, feel free to make any comments w.r.t. merging this code into
> head, such as preferred timing, whether or not you think it should
> happen, etc.
>
> If/when the merge is done, it will be fairly large, but shouldn't
> affect the NFSv3, NFSv4.0 server functionality (however, I may
> screw up and break them for a little while;-). I think NFSv4.1 might
> be useful, since it uses sessions to provide "exactly once"
> RPC semantics, which should improve overall correctness. This server
> code does not have any pNFS support in it. Implementing a pNFS server
> is a large project that may happen someday.
> Thanks in advance for any testing/review/comments, rick
> ps: The NFSv4.1 client is already in head and the options for
> mounting with NFSv4.1 are "nfsv4.minorversion=1" for FreeBSD
> and "vers=4,minorversion=1" for the Linux client.

--
Olav Grønås Gjerde

From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 09:24:54 2014
Date: Tue, 3 Jun 2014 10:24:42 +0100
From: "Steven Hartland"
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE

----- Original Message ----- From: "Mike Carlson"

> Scratch that last one; the cachefile had to be reset on the pool to
> /boot/zfs/zpool.cache.
>
> So I'm running it now, and it's taking so long to traverse all the
> blocks that it is telling me it's going to take around 5400 HOURS.
>
> I guess I'll report back in 90 days?

Try just the following; it should be quicker:

    zdb -uuuC zroot

Regards
Steve
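For reference, the zdb flags being traded here, summarized from zdb(8) (a summary, so double-check against the manpage): -u prints the uberblock, -C the pool configuration, -m the metaslabs, -c traverses and checksums metadata blocks, and -D prints dedup statistics. So the full walk

# zdb -uuumcD zroot

scales with pool size, while the suggested

# zdb -uuuC zroot

only reads the labels and configuration and should return in seconds.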
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 15:57:56 2014
Date: Tue, 03 Jun 2014 08:57:54 -0700
From: Mike Carlson
To: Steven Hartland, freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE

On 6/3/2014 2:24 AM, Steven Hartland wrote:
> ----- Original Message ----- From: "Mike Carlson"
>
>> Scratch that last one; the cachefile had to be reset on the pool to
>> /boot/zfs/zpool.cache.
>>
>> So I'm running it now, and it's taking so long to traverse all the
>> blocks that it is telling me it's going to take around 5400 HOURS.
>>
>> I guess I'll report back in 90 days?
>
> Try just the following; it should be quicker:
>
>     zdb -uuuC zroot
>
> Regards
> Steve

zdb -uuumcD eventually segfaulted:

Uberblock:
    magic = 0000000000bab10c
    version = 5000
    txg = 3378596
    guid_sum = 1996697515446579069
    timestamp = 1401756315 UTC = Mon Jun  2 17:45:15 2014
    rootbp = DVA[0]=<0:3f08b7fd000:600> DVA[1]=<0:5500f66f000:600>
        DVA[2]=<0:86001fb9c00:600> [L0 DMU objset] fletcher4 lzjb LE
        contiguous unique triple size=800L/200P birth=3378596L/3378596P
        fill=326 cksum=10553d553d:65de1705c49:1445be46ea217:2bed6cb4bc5e02

All DDTs are empty

Metaslabs:
    vdev 0, metaslabs 143

    metaslab  offset        spacemap  free
    --------  ------------  --------  -----
    0         0             34        12.8G
    1         1000000000    162       21.0G
    2         2000000000    170       4.20G
    3         3000000000    182       26.8G
    4         4000000000    183       18.7G
    5         5000000000    184       27.9G
    6         6000000000    185       19.9G
    7         7000000000    187       30.8G
    8         8000000000    188       24.4G
    9         9000000000    189       2.73G
    10        a000000000    190       17.4G
    11        b000000000    193       20.5G
    12        c000000000    194       10.0G
    13        d000000000    195       15.0G
    14        e000000000    196       19.8G
    15        f000000000    197       22.6G
    16        10000000000   198       11.8G
    17        11000000000   199       18.3G
    18        12000000000   200       3.35G
    19        13000000000   201       24.2G
    20        14000000000   202       9.8G
    21        15000000000   205       16.1G
    22        16000000000   206       31.4G
    23        17000000000   207       10.6G
    24        18000000000   208       29.9G
    25        19000000000   209       13.0G
    26        1a000000000   210       15.2G
    27        1b000000000   33        35.3G
    28        1c000000000   186       3.40G
    29        1d000000000   211       17.9G
    30        1e000000000   212       11.2G
    31        1f000000000   213       7.69G
    32        20000000000   214       21.2G
    33        21000000000   215       7.66G
    34        22000000000   216       15.6G
    35        23000000000   217       28.2G
    36        24000000000   218       20.8G
    37        25000000000   221       14.5G
    38        26000000000   192       14.1G
    39        27000000000   222       23.5G
    40        28000000000   223       22.8G
    41        29000000000   224       16.2G
    42        2a000000000   225       16.7G
    43        2b000000000   226       18.3G
    44        2c000000000   227       3.63G
    45        2d000000000   228       6.13G
    46        2e000000000   229       22.8G
    47        2f000000000   230       31.2G
    48        30000000000   204       5.64G
    49        31000000000   232       4.14G
    50        32000000000   233       22.0G
    51        33000000000   234       21.1G
    52        34000000000   235       10.9G
    53        35000000000   236       28.6G
    54        36000000000   32        24.2G
    55        37000000000   237       6.30G
    56        38000000000   238       22.6G
    57        39000000000   239       12.9G
    58        3a000000000   242       22.8G
    59        3b000000000   243       22.0G
    60        3c000000000   244       26.4G
    61        3d000000000   245       9.6G
    62        3e000000000   246       22.1G
    63        3f000000000   247       59.1G
    64        40000000000   220       61.8G
    65        41000000000   191       17.7G
    66        42000000000   248       13.1G
    67        43000000000   249       22.5G
    68        44000000000   250       4.39G
    69        45000000000   251       16.2G
    70        46000000000   252       3.88G
    71        47000000000   253       8.96G
    72        48000000000   254       25.2G
    73        49000000000   255       15.2G
    74        4a000000000   257       26.1G
    75        4b000000000   203       5.36G
    76        4c000000000   258       59.4G
    77        4d000000000   259       15.9G
    78        4e000000000   260       62.1G
    79        4f000000000   261       19.4G
    80        50000000000   262       4.07G
    81        51000000000   263       31.0G
    82        52000000000   264       32.1G
    83        53000000000   265       21.9G
    84        54000000000   266       26.2G
    85        55000000000   241       58.9G
    86        56000000000   267       22.3G
    87        57000000000   268       8.49G
    88        58000000000   269       17.5G
    89        59000000000   270       24.2G
    90        5a000000000   271       6.78G
    91        5b000000000   219       12.7G
    92        5c000000000   274       27.4G
    93        5d000000000   275       21.5G
    94        5e000000000   276       25.2G
    95        5f000000000   277       27.8G
    96        60000000000   278       6.67G
    97        61000000000   279       26.3G
    98        62000000000   280       12.0G
    99        63000000000   281       18.1G
    100       64000000000   282       23.3G
    101       65000000000   256       25.0G
    102       66000000000   231       16.8G
    103       67000000000   284       16.2G
    104       68000000000   285       20.0G
    105       69000000000   286       30.6G
    106       6a000000000   287       24.5G
    107       6b000000000   288       19.6G
    108       6c000000000   289       16.8G
    109       6d000000000   290       22.7G
    110       6e000000000   291       22.0G
    111       6f000000000   292       16.6G
    112       70000000000   240       14.8G
    113       71000000000   293       20.9G
    114       72000000000   294       53.7G
    115       73000000000   295       17.9G
    116       74000000000   296       19.1G
    117       75000000000   297       32.7G
    118       76000000000   298       17.8G
    119       77000000000   273       55.0G
    120       78000000000   299       20.7G
    121       79000000000   300       16.8G
    122       7a000000000   301       16.8G
    123       7b000000000   302       22.7G
    124       7c000000000   303       14.8G
    125       7d000000000   304       22.1G
    126       7e000000000   305       15.3G
    127       7f000000000   306       17.1G
    128       80000000000   307       20.2G
    129       81000000000   283       58.2G
    130       82000000000   308       24.5G
    131       83000000000   309       4.19G
    132       84000000000   310       15.0G
    133       85000000000   311       19.9G
    134       86000000000   312       60.9G
    135       87000000000   313       60.6G
    136       88000000000   314       60.9G
    137       89000000000   315       59.8G
    138       8a000000000   316       60.9G
    139       8b000000000   272       61.6G
    140       8c000000000   317       62.4G
    141       8d000000000   318       61.2G
    142       8e000000000   319       61.5G

Traversing all blocks to verify metadata checksums and verify nothing leaked ...

load: 1.59  cmd: zdb 54160 [physrd] 31.13r 3.05u 1.15s 4% 142544k
load: 0.45  cmd: zdb 54160 [physrd] 105.37r 6.69u 2.33s 4% 263428k
5.64T completed ( 119MB/s) estimated time remaining: 0hr 12min 55sec
Assertion failed: (bp->blk_pad[0] == 0), file
/usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c,
line 2978.
Abort (core dumped)

The second command you suggested returned:

# zdb -uuuC zroot

MOS Configuration:
    version: 5000
    name: 'zroot'
    state: 0
    txg: 3377279
    pool_guid: 9132288035431788388
    hostid: 2783470193
    hostname: 'working-1.discdrive.bayphoto.com'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9132288035431788388
        children[0]:
            type: 'raidz'
            id: 0
            guid: 15520162542638044402
            nparity: 2
            metaslab_array: 31
            metaslab_shift: 36
            ashift: 9
            asize: 9894744555520
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 4289437176706222104
                path: '/dev/mfid0p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02310703f4b791/b'
                phys_path: '/dev/mfid0p2'
                whole_disk: 1
                DTL: 181
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 5369387862706621015
                path: '/dev/mfid1p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02311604ce1965/b'
                phys_path: '/dev/mfid1p2'
                whole_disk: 1
                DTL: 180
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 456749962069636782
                path: '/dev/mfid2p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02312105778eef/b'
                phys_path: '/dev/mfid2p2'
                whole_disk: 1
                DTL: 179
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 3809413300177228462
                path: '/dev/mfid3p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02312905f430b5/b'
                phys_path: '/dev/mfid3p2'
                whole_disk: 1
                DTL: 178
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 4978694931676882497
                path: '/dev/mfid4p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02313606b73c4a/b'
                phys_path: '/dev/mfid4p2'
                whole_disk: 1
                DTL: 177
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 17831739822150458220
                path: '/dev/mfid5p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a023142077914f5/b'
                phys_path: '/dev/mfid5p2'
                whole_disk: 1
                DTL: 176
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 1286918567594965543
                path: '/dev/mfid6p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02314c080cb066/b'
                phys_path: '/dev/mfid6p2'
                whole_disk: 1
                DTL: 175
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 7958718879588658810
                path: '/dev/mfid7p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02315608a7f0a2/b'
                phys_path: '/dev/mfid7p2'
                whole_disk: 1
                DTL: 174
                create_txg: 4
            children[8]:
                type: 'disk'
                id: 8
                guid: 18392960683862755998
                path: '/dev/mfid8p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a023160093a9190/b'
                phys_path: '/dev/mfid8p2'
                whole_disk: 1
                DTL: 173
                create_txg: 4
            children[9]:
                type: 'disk'
                id: 9
                guid: 13046629036569375198
                path: '/dev/mfid9p2'
                devid: 'id1,sd@n6b8ca3a0f13870001a02316909c8894c/b'
                phys_path: '/dev/mfid9p2'
                whole_disk: 1
                DTL: 172
                create_txg: 4
            children[10]:
                type: 'disk'
                id: 10
                guid: 10604061156531251346
                path: '/dev/mfid11p2'
                devid: 'id1,sd@n6b8ca3a0ef7a7a0019cc18e30bbfa11e/b'
                phys_path: '/dev/mfid11p2'
                whole_disk: 1
                DTL: 171
                create_txg: 4
    features_for_read:

Uberblock:
    magic = 0000000000bab10c
    version = 5000
    txg = 3389469
    guid_sum = 1996697515446579069
    timestamp = 1401810802 UTC = Tue Jun  3 08:53:22 2014
    rootbp = DVA[0]=<0:3f0bf445c00:c00> DVA[1]=<0:55027e77200:c00>
        DVA[2]=<0:86003a4c400:c00> [L0 DMU objset] fletcher4 uncompressed
        LE contiguous unique triple size=800L/800P birth=3389469L/3389469P
        fill=326 cksum=389487e40:6aa058451f9:64bbaf298ba16:3f9bfc58017be5d

Any reason why I would have to manually re-import the cache file?
I had performed that task during the initial install (this was before
bsdinstall had a ZFS-on-root option, so it was done manually: you export
the cachefile, then at the end of the install cp it to
/boot/zfs/zpool.cache and re-import it).

Mike C
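If the cachefile property really is the problem, resetting it should not need an export/import cycle; a sketch, assuming the pool name zroot from the output above:

# zpool set cachefile=/boot/zfs/zpool.cache zroot
# zpool get cachefile zroot      # verify it stuck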
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 16:50:54 2014
Date: Tue, 3 Jun 2014 09:48:58 -0700 (PDT)
From: Jim Barker
To: freebsd-fs@freebsd.org
Subject: Solaris 10 zfs bug
FYI:

I encountered a bug in Solaris 10 that may also be present in the code
that FreeBSD forked some time ago. This is the forum entry I put up in
case it's relevant:

https://forums.freebsd.org/viewtopic.php?f=48&t=46716&p=261143#p261143

Regards,
Jim Barker
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 17:08:07 2014
Date: Tue, 3 Jun 2014 12:07:56 -0500
From: Mark Felder
To: Jim Barker
Cc: freebsd-fs@freebsd.org
Subject: Re: Solaris 10 zfs bug

On 2014-06-03 11:48, Jim Barker wrote:
> FYI:
>
> I encountered a bug in Solaris 10 that may also be present in the code
> that FreeBSD forked some time ago. This is the forum entry I put up in
> case it's relevant:
>
> https://forums.freebsd.org/viewtopic.php?f=48&t=46716&p=261143#p261143

Do you have any details on how to reproduce the panic? That would permit
the developers to find the bug in the FreeBSD port of ZFS and fix it.

From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 17:11:45 2014
Date: Tue, 03 Jun 2014 17:11:45 +0000
From: no-reply-bugzilla-daemon@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 186112] [zfs] [panic] ZFS Panic/Solaris Assert/zap.c:479
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=186112

--- Comment #2 from Larry Rosenman ---
I've wound up commenting out a bunch of these ASSERTs and haven't seen
any negative consequences. HOWEVER, I'd like someone to let me know how
I could look at the filesystem(s)/pool and see whether there is a real
issue.

--
You are receiving this mail because:
You are the assignee for the bug.
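On the "is there a real issue" question, the usual checks are a scrub plus a zdb block traversal; a sketch, with zroot as a placeholder pool name:

# zpool scrub zroot         # reads everything; repairs what redundancy allows
# zpool status -v zroot     # after the scrub: lists any files with unrecoverable errors
# zdb -b zroot              # read-only traversal; reports leaked or double-allocated blocks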
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 18:26:14 2014
Date: Tue, 3 Jun 2014 11:26:06 -0700 (PDT)
From: Jim Barker
To: Mark Felder
Cc: freebsd-fs@freebsd.org
Subject: Re: Solaris 10 zfs bug

Mark,

I wish I did. It simply panicked on me in the middle of the night,
presumably under typical use. This has happened twice in the past 5
days, both times while I was sleeping. I could provide the core dump,
but it is a Solaris core dump, so I don't know how much use you could
get out of it.

The debug output that I put in the thread was created by an Oracle
kernel engineer. I was hoping to provide a heads-up; I don't do kernel
development myself, so beyond what was provided I unfortunately can't
give any further information.

Regards,
Jim Barker

From: Mark Felder
Sent: Tuesday, June 3, 2014 1:07 PM
Subject: Re: Solaris 10 zfs bug

> Do you have any details on how to reproduce the panic? That would
> permit the developers to find the bug in the FreeBSD port of ZFS and
> fix it.

From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 20:10:19 2014
Date: Tue, 3 Jun 2014 15:56:58 -0400 (EDT)
From: Rick Macklem
To: Olav Gjerde
Cc: FreeBSD Filesystems
Subject: Re: RFC and testing: NFSv4.1 server going into head
Olav Gjerde wrote:
> This is great stuff! I'm wondering, though: is there any reason that
> the mount option for FreeBSD has to be different from Linux's?

The "nfsv4" option was chosen to be compatible with "nfsv3" and "nfsv2",
which FreeBSD has used forever (I don't know who originally chose them).

I have thought of adding "vers=" ("nfsv4" would still work) to be more
compatible with Solaris and Linux, but I've never gotten around to it.
If no one else comes up with such a patch, I may get around to it
someday, rick

> > ps: The NFSv4.1 client is already in head and the options for
> > mounting with NFSv4.1 are "nfsv4.minorversion=1" for FreeBSD
> > and "vers=4,minorversion=1" for the Linux client.
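Spelled out as full commands, the two client invocations would look roughly like this (a sketch with placeholder server and export names, writing the FreeBSD side as the separate nfsv4 and minorversion options that mount_nfs(8) accepts):

FreeBSD client:
# mount -t nfs -o nfsv4,minorversion=1 server:/export /mnt

Linux client:
# mount -t nfs -o vers=4,minorversion=1 server:/export /mnt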
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 20:38:31 2014
Date: Tue, 3 Jun 2014 13:38:24 -0700 (PDT)
From: Jim Barker
To: Mark Felder
Cc: freebsd-fs@freebsd.org
Subject: Re: Solaris 10 zfs bug

Mark,

I was looking at the timeframes when the core dumps occurred, and around
that time we were doing some zfs sends to replicate some data onto that
system. So either the snapshot creation, the zfs send, or the snapshot
deletion may have triggered the bug.

Jim

From: Mark Felder
Sent: Tuesday, June 3, 2014 1:07 PM
Subject: Re: Solaris 10 zfs bug

> Do you have any details on how to reproduce the panic? That would
> permit the developers to find the bug in the FreeBSD port of ZFS and
> fix it.
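For anyone who wants to try to reproduce that on FreeBSD, a crude stress loop over the same snapshot/send/destroy path might look like the following (a sketch only, with a placeholder dataset name; point it at a disposable dataset):

#!/bin/sh
# Hammer the snapshot -> send -> destroy path in a tight loop.
DS=tank/scratch     # placeholder: use a disposable dataset
i=0
while true; do
    zfs snapshot "$DS@stress$i"
    zfs send "$DS@stress$i" > /dev/null
    [ "$i" -gt 0 ] && zfs destroy "$DS@stress$((i-1))"
    i=$((i+1))
done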
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 22:17:17 2014
Date: Tue, 3 Jun 2014 17:17:10 -0500
From: Mark Felder
To: Jim Barker
Cc: freebsd-fs@freebsd.org
Subject: Re: Solaris 10 zfs bug

On Jun 3, 2014, at 15:38, Jim Barker wrote:

> Mark,
>
> I was looking at the timeframes when the core dumps occurred, and
> around that time we were doing some zfs sends to replicate some data
> onto that system. So either the snapshot creation, the zfs send, or
> the snapshot deletion may have triggered the bug.

I personally do quite a few snapshots and a zfs send every 15 minutes on
a 9.x server and haven't seen this yet. I'd have figured iXsystems would
have hit it while doing QA on their TrueNAS product as well if it were
as simple as doing a large amount of send/recv testing. However, it's
good to have this documented publicly in case someone does come across a
similar issue. Perhaps it could be related to ZFS attributes on a
particular filesystem (dedup, compression, etc.)?

Thanks for the heads up.
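Checking whether those attributes are set on an affected filesystem is cheap; a sketch, with the pool name as a placeholder:

# zfs get -r compression,dedup tank | grep -v default

Anything reported with a local source, rather than an inherited default, would be worth including alongside a bug report.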
I'd have figured iXsystems would have hit it while doing QA on their TrueNAS product as well, if it were as simple as doing a large amount of send/recv testing. However, it's good to have this documented publicly in case someone does come across a similar issue. Perhaps it could be related to zfs attributes on a particular filesystem? (dedup, compression, etc.)

Thanks for the heads up.

From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 23:33:43 2014
From: "Steven Hartland"
To: "Jim Barker", "Mark Felder"
References: <1401814138.84520.YahooMailNeo@web121106.mail.ne1.yahoo.com> <1401827904.21523.YahooMailNeo@web121106.mail.ne1.yahoo.com>
Subject: Re: Solaris 10 zfs bug
Date: Wed, 4 Jun 2014 00:33:28 +0100
Cc: freebsd-fs@freebsd.org, owner-freebsd-fs@freebsd.org
List-Id: Filesystems

It's likely FreeBSD is quite different in that area; unless you
can reproduce it on a FreeBSD install, it's unlikely we're going to
be able to do much about it.

You might consider seeing if anyone on the openzfs lists knows
of the issue though.

    Regards
    Steve
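Mark's question above about per-filesystem attributes is straightforward to survey on a running system. A minimal sketch, with "tank" as a placeholder pool name:

    # The attributes Mark mentions, for every filesystem in the pool.
    zfs get -r -t filesystem compression,dedup tank

    # Pool-wide dedup ratio; anything other than 1.00x means dedup
    # has been active at some point.
    zpool list -o name,size,allocated,dedupratio tank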
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 3 23:56:56 2014
User-Agent: K-9 Mail for Android
References: <1401814138.84520.YahooMailNeo@web121106.mail.ne1.yahoo.com> <1401827904.21523.YahooMailNeo@web121106.mail.ne1.yahoo.com>
Subject: Re: Solaris 10 zfs bug
From: Jim Barker
Date: Tue, 03 Jun 2014 19:50:27 -0400
To: Steven Hartland, Mark Felder
Cc: freebsd-fs@freebsd.org, owner-freebsd-fs@freebsd.org
List-Id: Filesystems

Steve,

That is good to hear; I just wanted to bring awareness to a potential issue. I am happy that it doesn't seem to apply.

Regards,
Jim Barker

On June 3, 2014 7:33:28 PM EDT, Steven Hartland wrote:
>It's likely FreeBSD is quite different in that area; unless you
>can reproduce it on a FreeBSD install, it's unlikely we're going to
>be able to do much about it.
>
>You might consider seeing if anyone on the openzfs lists knows
>of the issue though.
>
> Regards
> Steve
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 4 08:00:12 2014
Message-Id: <201406040800.s5480B2d049301@kenobi.freebsd.org>
From: bz-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bugzilla] Commit Needs MFC
Date: Wed, 04 Jun 2014 08:00:11 +0000
List-Id: Filesystems

Hi,

You have a bug in the "Needs MFC" state which has not been touched in 7 days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout, you may update this bug with a comment and I won't remind you again for 7 days.

This reminder is an experimental feature. Please file a bug or mail bugmeister@ with concerns. This search was scheduled by eadler@FreeBSD.org.
(8 bugs)

Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names

Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [nfs] Cannot mount / in read-only, over NFS

Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [nfs] mount(8): read-only remount of NFS volume does not work

Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional

Bug 154228: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154228
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [md] md getting stuck in wdrain state

Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device

Bug 156545: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=156545
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [ufs] mv could break UFS on SMP systems

Bug 180236: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180236
    Severity: Affects Only Me   Priority: Normal   Hardware: Any
    Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
    Summary: [zfs] [nullfs] Leakage free space using ZFS with nullfs on 9.1-STABLE

From owner-freebsd-fs@FreeBSD.ORG Wed Jun 4 10:50:59 2014
To: freebsd-fs@freebsd.org
Subject: Re: Solaris 10 zfs bug
References: <1401814138.84520.YahooMailNeo@web121106.mail.ne1.yahoo.com> <1401819966.4015.YahooMailNeo@web121103.mail.ne1.yahoo.com>
Date: Wed, 04 Jun 2014 12:50:49 +0200
From: "Ronald Klop"
In-Reply-To: <1401819966.4015.YahooMailNeo@web121103.mail.ne1.yahoo.com>
User-Agent: Opera Mail/12.16 (FreeBSD)
List-Id: Filesystems
On Tue, 03 Jun 2014 20:26:06 +0200, Jim Barker wrote:

> Mark,
>
> I wish I did. It simply panic'd on me in the middle of the night,
> presumably under typical use. This just happened to me twice in the
> past 5 days, both times while I was sleeping. I could provide the core

Then stop sleeping! :-)

> dump, but it is a Solaris core dump, so I don't know how much use you
> could get out of it.
>
> The debug output that I put in the thread was created by an Oracle
> kernel engineer. I was hoping to provide a heads up, and I don't do
> kernel development myself, so beyond what was provided I couldn't give
> any further information, unfortunately.
>
> Regards,
> Jim Barker
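A Solaris vmcore indeed won't help the FreeBSD side much; if a comparable panic ever shows up on FreeBSD, it is only actionable with a local crash dump. A minimal setup sketch follows; the explicit device name is an example, not taken from this thread:

    # /etc/rc.conf -- reserve a dump device so a panic leaves a vmcore behind
    dumpdev="AUTO"    # or an explicit swap partition such as /dev/ada0p3

    # After the reboot, savecore(8) will have written the dump to /var/crash:
    kgdb /boot/kernel/kernel /var/crash/vmcore.0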
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 4 14:57:18 2014
From: bz-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 157399] [zfs] trouble with: mdconfig force delete && zfs stripe
Date: Wed, 04 Jun 2014 14:57:17 +0000
List-Id: Filesystems

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=157399

rum1cro@yandex.ru changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rum1cro@yandex.ru

--- Comment #3 from rum1cro@yandex.ru ---
It seems this was fixed; I can't reproduce it on head.

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Wed Jun 4 18:47:01 2014
Message-ID: <538F699E.4060802@bayphoto.com>
Date: Wed, 04 Jun 2014 11:46:54 -0700
From: Mike Carlson
Reply-To: mike@bayphoto.com
To: freebsd-fs@freebsd.org
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
In-Reply-To: <538DF082.3030407@bayphoto.com>
List-Id: Filesystems

Top-posting... sorry.

I'm going to have to roll this particular server back into production, so I'll be rebuilding it from scratch. That is okay with this particular system; the other server that exhibited the same issue will have to have all 19TB of its usable data streamed off to temp storage (if we can get it) and rebuilt as well.
Thank you Steve for being so helpful, and patient with me stumbling through kgdb :)

I have some lingering questions about the entire situation:

First, these servers perform regular zpool scrubs (once a month) and have ECC memory. According to the additional logging information I was able to get from Steve's patch, it seems that even with these safeguards data was still corrupted. A scrub after the initial panic did not report any errors.

Second, these two servers had an extra anomaly, and that was the missing zpool.cache. I say missing, because zdb was unable to access the zpool; it was not until I ran "zpool set cachefile=/boot/zfs/zpool.cache " that zdb could open it. This was previously not an issue.

The two servers were upgraded from 9.1 to 10 on the same morning, within minutes of each other. That is about it as far as commonalities. Both have different drive types (900GB SAS vs 2TB SATA), different controllers (Dell PERC (mfi) vs LSI (mps)), Dell vs SuperMicro boards...

We do use the aio kernel module, as well as some sysctl and loader.conf tuning. I've backed all of those out, so we're just running a stock OS.

Ideally, I would like to never run into this situation again. However, I don't have any evidence to point to an upgrade misstep or some catastrophic configuration error (kernel parameters, zpool create).

Thanks everyone,
Mike C

On 6/3/2014 8:57 AM, Mike Carlson wrote:
> On 6/3/2014 2:24 AM, Steven Hartland wrote:
>>
>> ----- Original Message ----- From: "Mike Carlson"
>>
>>> Scratch that last one, the cachefile had to be reset on the pool to
>>> /boot/zfs/zpool.cache
>>>
>>> So I'm running it now, and it's taking so long to traverse all
>>> blocks that it is telling me it's going to take around 5400 HOURS.
>>>
>>> I guess I'll report back in 90 days?
>> Try with just the following should be quicker:
>> zdb -uuuC zroot
>>
>> Regards
>> Steve
>>
>
> zdb -uuumcD eventually segfaulted:
>
> Uberblock:
>         magic = 0000000000bab10c
>         version = 5000
>         txg = 3378596
>         guid_sum = 1996697515446579069
>         timestamp = 1401756315 UTC = Mon Jun 2 17:45:15 2014
>         rootbp = DVA[0]=<0:3f08b7fd000:600> DVA[1]=<0:5500f66f000:600> DVA[2]=<0:86001fb9c00:600> [L0 DMU objset] fletcher4 lzjb LE contiguous unique triple size=800L/200P birth=3378596L/3378596P fill=326 cksum=10553d553d:65de1705c49:1445be46ea217:2bed6cb4bc5e02
>
> All DDTs are empty
>
> Metaslabs:
>     vdev 0
>     metaslabs 143      offset           spacemap    free
>     ---------------    --------------   ---------   ---------
>     metaslab   0   offset           0   spacemap  34   free  12.8G
>     metaslab   1   offset  1000000000   spacemap 162   free  21.0G
>     metaslab   2   offset  2000000000   spacemap 170   free  4.20G
>     metaslab   3   offset  3000000000   spacemap 182   free  26.8G
>     metaslab   4   offset  4000000000   spacemap 183   free  18.7G
>     metaslab   5   offset  5000000000   spacemap 184   free  27.9G
>     metaslab   6   offset  6000000000   spacemap 185   free  19.9G
>     metaslab   7   offset  7000000000   spacemap 187   free  30.8G
>     metaslab   8   offset  8000000000   spacemap 188   free  24.4G
>     metaslab   9   offset  9000000000   spacemap 189   free  2.73G
>     metaslab  10   offset  a000000000   spacemap 190   free  17.4G
>     metaslab  11   offset  b000000000   spacemap 193   free  20.5G
>     metaslab  12   offset  c000000000   spacemap 194   free  10.0G
>     metaslab  13   offset  d000000000   spacemap 195   free  15.0G
>     metaslab  14   offset  e000000000   spacemap 196   free  19.8G
>     metaslab  15   offset  f000000000   spacemap 197   free  22.6G
>     metaslab  16   offset 10000000000   spacemap 198   free  11.8G
>     metaslab  17   offset 11000000000   spacemap 199   free  18.3G
>     metaslab  18   offset 12000000000   spacemap 200   free  3.35G
>     metaslab  19   offset 13000000000   spacemap 201   free  24.2G
>     metaslab  20   offset 14000000000   spacemap 202   free  9.8G
>     metaslab  21   offset 15000000000   spacemap 205   free  16.1G
>     metaslab  22   offset 16000000000   spacemap 206   free  31.4G
>     metaslab  23   offset 17000000000   spacemap 207   free  10.6G
>     metaslab  24   offset 18000000000   spacemap 208   free  29.9G
>     metaslab  25   offset 19000000000   spacemap 209   free  13.0G
>     metaslab  26   offset 1a000000000   spacemap 210   free  15.2G
>     metaslab  27   offset 1b000000000   spacemap  33   free  35.3G
>     metaslab  28   offset 1c000000000   spacemap 186   free  3.40G
>     metaslab  29   offset 1d000000000   spacemap 211   free  17.9G
>     metaslab  30   offset 1e000000000   spacemap 212   free  11.2G
>     metaslab  31   offset 1f000000000   spacemap 213   free  7.69G
>     metaslab  32   offset 20000000000   spacemap 214   free  21.2G
>     metaslab  33   offset 21000000000   spacemap 215   free  7.66G
>     metaslab  34   offset 22000000000   spacemap 216   free  15.6G
>     metaslab  35   offset 23000000000   spacemap 217   free  28.2G
>     metaslab  36   offset 24000000000   spacemap 218   free  20.8G
>     metaslab  37   offset 25000000000   spacemap 221   free  14.5G
>     metaslab  38   offset 26000000000   spacemap 192   free  14.1G
>     metaslab  39   offset 27000000000   spacemap 222   free  23.5G
>     metaslab  40   offset 28000000000   spacemap 223   free  22.8G
>     metaslab  41   offset 29000000000   spacemap 224   free  16.2G
>     metaslab  42   offset 2a000000000   spacemap 225   free  16.7G
>     metaslab  43   offset 2b000000000   spacemap 226   free  18.3G
>     metaslab  44   offset 2c000000000   spacemap 227   free  3.63G
>     metaslab  45   offset 2d000000000   spacemap 228   free  6.13G
>     metaslab  46   offset 2e000000000   spacemap 229   free  22.8G
>     metaslab  47   offset 2f000000000   spacemap 230   free  31.2G
>     metaslab  48   offset 30000000000   spacemap 204   free  5.64G
>     metaslab  49   offset 31000000000   spacemap 232   free  4.14G
>     metaslab  50   offset 32000000000   spacemap 233   free  22.0G
>     metaslab  51   offset 33000000000   spacemap 234   free  21.1G
>     metaslab  52   offset 34000000000   spacemap 235   free  10.9G
>     metaslab  53   offset 35000000000   spacemap 236   free  28.6G
>     metaslab  54   offset 36000000000   spacemap  32   free  24.2G
>     metaslab  55   offset 37000000000   spacemap 237   free  6.30G
>     metaslab  56   offset 38000000000   spacemap 238   free  22.6G
>     metaslab  57   offset 39000000000   spacemap 239   free  12.9G
>     metaslab  58   offset 3a000000000   spacemap 242   free  22.8G
>     metaslab  59   offset 3b000000000   spacemap 243   free  22.0G
>     metaslab  60   offset 3c000000000   spacemap 244   free  26.4G
>     metaslab  61   offset 3d000000000   spacemap 245   free  9.6G
>     metaslab  62   offset 3e000000000   spacemap 246   free  22.1G
>     metaslab  63   offset 3f000000000   spacemap 247   free  59.1G
>     metaslab  64   offset 40000000000   spacemap 220   free  61.8G
>     metaslab  65   offset 41000000000   spacemap 191   free  17.7G
>     metaslab  66   offset 42000000000   spacemap 248   free  13.1G
>     metaslab  67   offset 43000000000   spacemap 249   free  22.5G
>     metaslab  68   offset 44000000000   spacemap 250   free  4.39G
>     metaslab  69   offset 45000000000   spacemap 251   free  16.2G
>     metaslab  70   offset 46000000000   spacemap 252   free  3.88G
>     metaslab  71   offset 47000000000   spacemap 253   free  8.96G
>     metaslab  72   offset 48000000000   spacemap 254   free  25.2G
>     metaslab  73   offset 49000000000   spacemap 255   free  15.2G
>     metaslab  74   offset 4a000000000   spacemap 257   free  26.1G
>     metaslab  75   offset 4b000000000   spacemap 203   free  5.36G
>     metaslab  76   offset 4c000000000   spacemap 258   free  59.4G
>     metaslab  77   offset 4d000000000   spacemap 259   free  15.9G
>     metaslab  78   offset 4e000000000   spacemap 260   free  62.1G
>     metaslab  79   offset 4f000000000   spacemap 261   free  19.4G
>     metaslab  80   offset 50000000000   spacemap 262   free  4.07G
>     metaslab  81   offset 51000000000   spacemap 263   free  31.0G
>     metaslab  82   offset 52000000000   spacemap 264   free  32.1G
>     metaslab  83   offset 53000000000   spacemap 265   free  21.9G
>     metaslab  84   offset 54000000000   spacemap 266   free  26.2G
>     metaslab  85   offset 55000000000   spacemap 241   free  58.9G
>     metaslab  86   offset 56000000000   spacemap 267   free  22.3G
>     metaslab  87   offset 57000000000   spacemap 268   free  8.49G
>     metaslab  88   offset 58000000000   spacemap 269   free  17.5G
>     metaslab  89   offset 59000000000   spacemap 270   free  24.2G
>     metaslab  90   offset 5a000000000   spacemap 271   free  6.78G
>     metaslab  91   offset 5b000000000   spacemap 219   free  12.7G
>     metaslab  92   offset 5c000000000   spacemap 274   free  27.4G
>     metaslab  93   offset 5d000000000   spacemap 275   free  21.5G
>     metaslab  94   offset 5e000000000   spacemap 276   free  25.2G
>     metaslab  95   offset 5f000000000   spacemap 277   free  27.8G
>     metaslab  96   offset 60000000000   spacemap 278   free  6.67G
>     metaslab  97   offset 61000000000   spacemap 279   free  26.3G
>     metaslab  98   offset 62000000000   spacemap 280   free  12.0G
>     metaslab  99   offset 63000000000   spacemap 281   free  18.1G
>     metaslab 100   offset 64000000000   spacemap 282   free  23.3G
>     metaslab 101   offset 65000000000   spacemap 256   free  25.0G
>     metaslab 102   offset 66000000000   spacemap 231   free  16.8G
>     metaslab 103   offset 67000000000   spacemap 284   free  16.2G
>     metaslab 104   offset 68000000000   spacemap 285   free  20.0G
>     metaslab 105   offset 69000000000   spacemap 286   free  30.6G
>     metaslab 106   offset 6a000000000   spacemap 287   free  24.5G
>     metaslab 107   offset 6b000000000   spacemap 288   free  19.6G
>     metaslab 108   offset 6c000000000   spacemap 289   free  16.8G
>     metaslab 109   offset 6d000000000   spacemap 290   free  22.7G
>     metaslab 110   offset 6e000000000   spacemap 291   free  22.0G
>     metaslab 111   offset 6f000000000   spacemap 292   free  16.6G
>     metaslab 112   offset 70000000000   spacemap 240   free  14.8G
>     metaslab 113   offset 71000000000   spacemap 293   free  20.9G
>     metaslab 114   offset 72000000000   spacemap 294   free  53.7G
>     metaslab 115   offset 73000000000   spacemap 295   free  17.9G
>     metaslab 116   offset 74000000000   spacemap 296   free  19.1G
>     metaslab 117   offset 75000000000   spacemap 297   free  32.7G
>     metaslab 118   offset 76000000000   spacemap 298   free  17.8G
>     metaslab 119   offset 77000000000   spacemap 273   free  55.0G
>     metaslab 120   offset 78000000000   spacemap 299   free  20.7G
>     metaslab 121   offset 79000000000   spacemap 300   free  16.8G
>     metaslab 122   offset 7a000000000   spacemap 301   free  16.8G
>     metaslab 123   offset 7b000000000   spacemap 302   free  22.7G
>     metaslab 124   offset 7c000000000   spacemap 303   free  14.8G
>     metaslab 125   offset 7d000000000   spacemap 304   free  22.1G
>     metaslab 126   offset 7e000000000   spacemap 305   free  15.3G
>     metaslab 127   offset 7f000000000   spacemap 306   free  17.1G
>     metaslab 128   offset 80000000000   spacemap 307   free  20.2G
>     metaslab 129   offset 81000000000   spacemap 283   free  58.2G
>     metaslab 130   offset 82000000000   spacemap 308   free  24.5G
>     metaslab 131   offset 83000000000   spacemap 309   free  4.19G
>     metaslab 132   offset 84000000000   spacemap 310   free  15.0G
>     metaslab 133   offset 85000000000   spacemap 311   free  19.9G
>     metaslab 134   offset 86000000000   spacemap 312   free  60.9G
>     metaslab 135   offset 87000000000   spacemap 313   free  60.6G
>     metaslab 136   offset 88000000000   spacemap 314   free  60.9G
>     metaslab 137   offset 89000000000   spacemap 315   free  59.8G
>     metaslab 138   offset 8a000000000   spacemap 316   free  60.9G
>     metaslab 139   offset 8b000000000   spacemap 272   free  61.6G
>     metaslab 140   offset 8c000000000   spacemap 317   free  62.4G
>     metaslab 141   offset 8d000000000   spacemap 318   free  61.2G
>     metaslab 142   offset 8e000000000   spacemap 319   free  61.5G
>
> Traversing all blocks to verify metadata checksums and verify nothing leaked ...
>
> load: 1.59  cmd: zdb 54160 [physrd] 31.13r 3.05u 1.15s 4% 142544k
> load: 0.45  cmd: zdb 54160 [physrd] 105.37r 6.69u 2.33s 4% 263428k
> 5.64T completed ( 119MB/s) estimated time remaining: 0hr 12min 55sec
> Assertion failed: (bp->blk_pad[0] == 0), file /usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line 2978.
> Abort (core dumped)
>
> The second command you suggested returned:
>
> # zdb -uuuC zroot
>
> MOS Configuration:
>         version: 5000
>         name: 'zroot'
>         state: 0
>         txg: 3377279
>         pool_guid: 9132288035431788388
>         hostid: 2783470193
>         hostname: 'working-1.discdrive.bayphoto.com'
>         vdev_children: 1
>         vdev_tree:
>             type: 'root'
>             id: 0
>             guid: 9132288035431788388
>             children[0]:
>                 type: 'raidz'
>                 id: 0
>                 guid: 15520162542638044402
>                 nparity: 2
>                 metaslab_array: 31
>                 metaslab_shift: 36
>                 ashift: 9
>                 asize: 9894744555520
>                 is_log: 0
>                 create_txg: 4
>                 children[0]:
>                     type: 'disk'
>                     id: 0
>                     guid: 4289437176706222104
>                     path: '/dev/mfid0p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02310703f4b791/b'
>                     phys_path: '/dev/mfid0p2'
>                     whole_disk: 1
>                     DTL: 181
>                     create_txg: 4
>                 children[1]:
>                     type: 'disk'
>                     id: 1
>                     guid: 5369387862706621015
>                     path: '/dev/mfid1p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02311604ce1965/b'
>                     phys_path: '/dev/mfid1p2'
>                     whole_disk: 1
>                     DTL: 180
>                     create_txg: 4
>                 children[2]:
>                     type: 'disk'
>                     id: 2
>                     guid: 456749962069636782
>                     path: '/dev/mfid2p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02312105778eef/b'
>                     phys_path: '/dev/mfid2p2'
>                     whole_disk: 1
>                     DTL: 179
>                     create_txg: 4
>                 children[3]:
>                     type: 'disk'
>                     id: 3
>                     guid: 3809413300177228462
>                     path: '/dev/mfid3p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02312905f430b5/b'
>                     phys_path: '/dev/mfid3p2'
>                     whole_disk: 1
>                     DTL: 178
>                     create_txg: 4
>                 children[4]:
>                     type: 'disk'
>                     id: 4
>                     guid: 4978694931676882497
>                     path: '/dev/mfid4p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02313606b73c4a/b'
>                     phys_path: '/dev/mfid4p2'
>                     whole_disk: 1
>                     DTL: 177
>                     create_txg: 4
>                 children[5]:
>                     type: 'disk'
>                     id: 5
>                     guid: 17831739822150458220
>                     path: '/dev/mfid5p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a023142077914f5/b'
>                     phys_path: '/dev/mfid5p2'
>                     whole_disk: 1
>                     DTL: 176
>                     create_txg: 4
>                 children[6]:
>                     type: 'disk'
>                     id: 6
>                     guid: 1286918567594965543
>                     path: '/dev/mfid6p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02314c080cb066/b'
>                     phys_path: '/dev/mfid6p2'
>                     whole_disk: 1
>                     DTL: 175
>                     create_txg: 4
>                 children[7]:
>                     type: 'disk'
>                     id: 7
>                     guid: 7958718879588658810
>                     path: '/dev/mfid7p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02315608a7f0a2/b'
>                     phys_path: '/dev/mfid7p2'
>                     whole_disk: 1
>                     DTL: 174
>                     create_txg: 4
>                 children[8]:
>                     type: 'disk'
>                     id: 8
>                     guid: 18392960683862755998
>                     path: '/dev/mfid8p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a023160093a9190/b'
>                     phys_path: '/dev/mfid8p2'
>                     whole_disk: 1
>                     DTL: 173
>                     create_txg: 4
>                 children[9]:
>                     type: 'disk'
>                     id: 9
>                     guid: 13046629036569375198
>                     path: '/dev/mfid9p2'
>                     devid: 'id1,sd@n6b8ca3a0f13870001a02316909c8894c/b'
>                     phys_path: '/dev/mfid9p2'
>                     whole_disk: 1
>                     DTL: 172
>                     create_txg: 4
>                 children[10]:
>                     type: 'disk'
>                     id: 10
>                     guid: 10604061156531251346
>                     path: '/dev/mfid11p2'
>                     devid: 'id1,sd@n6b8ca3a0ef7a7a0019cc18e30bbfa11e/b'
>                     phys_path: '/dev/mfid11p2'
>                     whole_disk: 1
>                     DTL: 171
>                     create_txg: 4
>         features_for_read:
>
> Uberblock:
>         magic = 0000000000bab10c
>         version = 5000
>         txg = 3389469
>         guid_sum = 1996697515446579069
>         timestamp = 1401810802 UTC = Tue Jun 3 08:53:22 2014
>         rootbp = DVA[0]=<0:3f0bf445c00:c00> DVA[1]=<0:55027e77200:c00> DVA[2]=<0:86003a4c400:c00> [L0 DMU objset] fletcher4 uncompressed LE contiguous unique triple size=800L/800P birth=3389469L/3389469P fill=326 cksum=389487e40:6aa058451f9:64bbaf298ba16:3f9bfc58017be5d
>
> Any reason why I would have to manually re-import the cache file?
> I had performed that task during the initial install (this was before
> bsdinstall had a zfs on root option, so it was done manually, where
> you have to export the cachefile, then at the end of the install cp it
> to /boot/zfs/zpool.cache and re-import it)
>
> Mike C
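The workaround Mike describes comes down to two commands; the pool name zroot is taken from the zdb output quoted above, so this is just a restatement of what was run, not a new recipe:

    # Re-point the pool at the standard cachefile location.
    zpool set cachefile=/boot/zfs/zpool.cache zroot

    # zdb reads /boot/zfs/zpool.cache by default, so with the file back in
    # place the lighter-weight consistency check works again:
    zdb -uuuC zroot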
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 4 19:23:18 2014
From: "Steven Hartland"
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
Date: Wed, 4 Jun 2014 20:23:05 +0100
List-Id: Filesystems

You mention mfi and 9.1, which rings alarm bells. They shouldn't be, but if your drives are > 2^32 sectors you'll have corruption:
http://svnweb.freebsd.org/base?view=revision&revision=242497

In addition to this, I did a large number of fixes to mfi after this point which could result in all sorts of issues, but that doesn't explain issues with mps.

Upgrading shouldn't have removed the cache file, so I'm guessing that your initial install was already missing this. zdb is picky about having a cache file, which is something we should fix at some point, as IIRC the changes avg or mav made (I can't remember which) mean that FreeBSD doesn't rely on the cache file being present as much as it did.

Back to the corruption: unfortunately this could be any number of things, so it's almost impossible to tell at which point the issue originally occurred :( It might well be worth emailing a summary of the issue to the openzfs mailing list to see if someone on there has any ideas where the DVA corruption could have occurred.

    Regards
    Steve
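Steven's 2^32-sector threshold is easy to check against locally. A rough sketch using diskinfo(8); the device glob is an example and would need extending for two-digit mfid units like the mfid11 in Mike's pool:

    #!/bin/sh
    # Field 4 of diskinfo output is the media size in sectors.
    for d in /dev/mfid?; do
        sectors=$(diskinfo "$d" | awk '{print $4}')
        if [ "$sectors" -gt 4294967296 ]; then
            echo "$d: $sectors sectors (> 2^32, exposed to the pre-r242497 mfi bug)"
        else
            echo "$d: $sectors sectors (<= 2^32)"
        fi
    done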
> I had performed that task during the initial install (this was before > bsdinstall had a zfs on root option, so it was done manually, where > you have to export the cachefile, then at the end of the install cp it > to /boot/zfs/zpool.cache and re-import it) > > Mike C From owner-freebsd-fs@FreeBSD.ORG Wed Jun 4 20:25:06 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 22660F9F for ; Wed, 4 Jun 2014 20:25:06 +0000 (UTC) Received: from mx.got.net (mx3.mx3.got.net [207.111.237.42]) by mx1.freebsd.org (Postfix) with ESMTP id EEE542DC1 for ; Wed, 4 Jun 2014 20:25:05 +0000 (UTC) Received: from [192.168.251.39] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id 24E8423B59C; Wed, 4 Jun 2014 13:25:04 -0700 (PDT) Message-ID: <538F809F.5090705@bayphoto.com> Date: Wed, 04 Jun 2014 13:25:03 -0700 From: Mike Carlson Reply-To: mike@bayphoto.com User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Steven Hartland , freebsd-fs@freebsd.org Subject: Re: ZFS Kernel Panic on 10.0-RELEASE References: <5388D64D.4030400@bayphoto.com> <5388E5B4.3030002@bayphoto.com> <538BBEB7.4070008@bayphoto.com> <782C34792E95484DBA631A96FE3BEF20@multiplay.co.uk> <538C9CF3.6070208@bayphoto.com> <16ADD4D9DC73403C9669D8F34FDBD316@multiplay.co.uk> <538CB3EA.9010807@bayphoto.com> <6C6FB182781541CEBF627998B73B1DB4@multiplay.co.uk> <538CC16A.6060207@bayphoto.com> <538CDB7F.2060408@bayphoto.com> <88B3A7562A5F4F9B9EEF0E83BCAD2FB0@multiplay.co.uk> <538CE2B3.8090008@bayphoto.com> <85184EB23AA84607A360E601D03E1741@multiplay.co.uk> <538D0174.6000906@bayphoto.com> <538D18CB.5020906@bayphoto.com> <538D1CD5.5070902@bayphoto.com> <538DF082.3030407@bayphoto.com> <538F699E.4060802@bayphoto.com> In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080007050900000709090902" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 04 Jun 2014 20:25:06 -0000 This is a cryptographically signed message in MIME format. --------------ms080007050900000709090902 Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: quoted-printable Thanks Steve, I'll write up a summary for the openzfs-developers mailing list. FWIW, the other server that is identical to this working-1 system was built with a fresh 10.0-RELEASE install; we haven't had any issues, and it's been up and running for months now. On 6/4/2014 12:23 PM, Steven Hartland wrote: > You mention mfi and 9.1, which rings alarm bells. > > They shouldn't be, but if your drives are > 2^32 sectors you'll > have corruption: > http://svnweb.freebsd.org/base?view=revision&revision=242497 > > In addition to this I did a large number of fixes to mfi after > this point which could result in all sorts of issues, but that > doesn't explain issues with mps. > > Upgrading shouldn't have removed the cache file so I'm guessing > that your initial install was already missing this. > > zdb is picky about having a cache file, which is something we > should fix at some point as IIRC the changes avg or mav made, > I can't remember which, mean that FreeBSD doesn't rely on the cache > file being present as much as it did. > > Back to the corruption, unfortunately this could be any number > of things so it's almost impossible to tell at which point the > issue originally occurred :( > > It might well be worth emailing a summary of the issue to the > openzfs mailing list to see if someone on there has any ideas > where the DVA corruption could have occurred. > > Regards > Steve > > ----- Original Message ----- From: "Mike Carlson" > To: > Sent: Wednesday, June 04, 2014 7:46 PM > Subject: Re: ZFS Kernel Panic on 10.0-RELEASE > > > Top-posting... sorry > > I'm going to have to roll this particular server back into production, > so I'll be rebuilding it from scratch. > > That is okay with this particular system; the other server that > exhibited the same issue will have to have all 19TB of its usable data > streamed off to temp storage (if we can get it) and rebuilt as well. > > Thank you Steve for being so helpful, and patient with me stumbling > through kgdb :) > > > I have some lingering questions about the entire situation: > > First, these servers perform regular zpool scrubs (once a month) and > have ECC memory. According to the additional logging information I > was able to get from Steve's patch, it seems that even with these > safeguards data was still corrupted. A scrub after the initial panic > did not report any errors. > > Second, these two servers had an extra anomaly, and that was the > missing zpool.cache. I say missing, because zdb was unable to access > the zpool; it was not until I ran "zpool set > cachefile=/boot/zfs/zpool.cache ". This was previously not an > issue. > > The two servers were upgraded from 9.1 to 10 on the same morning, > within minutes of each other. That is about it as far as > commonalities. Both have different drive types (900GB SAS vs 2TB > SATA), different controllers (Dell PERC (mfi) vs LSI (mps)), Dell vs > SuperMicro boards... > > We do use the aio kernel module, as well as some sysctl and > loader.conf tuning. I've backed all of those out, so we're just > running a stock OS. > > Ideally, I would like to never run into this situation again. However, > I don't have any evidence to point to an upgrade misstep or some > catastrophic configuration error (kernel parameters, zpool create).
> > Thank everyone, > Mike C

[S/MIME signature attachment smime.p7s omitted]
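[Archive note: the cache-file dance that ran through this thread reduces to two commands. A minimal sh sketch, assuming the pool is named zroot as it is above; substitute your own pool name after "zpool set":]

    # Point the pool back at the cache file zdb expects to find.
    zpool set cachefile=/boot/zfs/zpool.cache zroot

    # Dump only the cached MOS config and uberblock; unlike the full
    # "zdb -uuumcD zroot" traversal (estimated at thousands of hours
    # earlier in the thread), this reads just pool metadata and
    # returns quickly.
    zdb -uuuC zroot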
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 5 08:00:10 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A4F97784 for ; Thu, 5 Jun 2014 08:00:10 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8840F2B44 for ; Thu, 5 Jun 2014 08:00:10 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5580AsY038722 for ; Thu, 5 Jun 2014 09:00:10 +0100 (BST) (envelope-from bz-noreply@freebsd.org) Message-Id: <201406050800.s5580AsY038722@kenobi.freebsd.org> From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Thu, 05 Jun 2014 08:00:10 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 05 Jun 2014 08:00:10 -0000 Hi, You have a bug in the "Needs MFC" state which has not been touched in 7 days. This email serves as a reminder that you may want to MFC this bug or marked it as completed. In the event you have a longer MFC timeout you may update this bug with a comment and I won't remind you again for 7 days. This reminder is an experimental feature. Please file a bug or mail bugmeister@ with concerns. This search was scheduled by eadler@FreeBSD.org.
(8 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 154228: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154228 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [md] md getting stuck in wdrain state Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device Bug 156545: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=156545 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [ufs] mv could break UFS on SMP systems Bug 180236: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180236 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] [nullfs] Leakage free space using ZFS with nullfs on 9.1-STABLE From owner-freebsd-fs@FreeBSD.ORG Thu Jun 5 08:35:04 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4A5852A8 for ; Thu, 5 Jun 2014 08:35:04 +0000 (UTC) Received: from mail-yh0-x234.google.com (mail-yh0-x234.google.com [IPv6:2607:f8b0:4002:c01::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0E8602EB8 for ; Thu, 5 Jun 2014 08:35:03 +0000 (UTC) Received: by mail-yh0-f52.google.com with SMTP id z6so547232yhz.39 for ; Thu, 05 Jun 2014 01:35:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=3KA778nbpXy8DNS12mPIfiV3HmysjlI+x0brKkXTS7s=; b=VcUEJSFhUJttGDEcwRH9Fu1aZWFJ4ZQ4QXgnfhaSDW3pL/9DgoIfHrqpVd0fgSDJKo QhyzEAbpCB7wZZZ5ONEUxyK5VjPe18C+XWe2g320SAXUcNjQrBftL7+iKcLC0dMv0SQA RfeWmqaYmkDu220+u/vKx2l7ui0avYx4EoI46xWVDw0h5f64x6OGak30Hn72R94RPFX3 5G1ZlSduSaB00HrqNd1Ti2f4DOeRvZJOI+prf11/9gXZztzGqZMT47Op0cuMQmg39qgc 2oQryGQUfN1h050kkJkYPIXE7n7Mwffs1ifVLTBlW8ObeZHPuiDWflwZIOG/ohBEjU/B omSw== MIME-Version: 1.0 X-Received: by 10.236.180.169 with SMTP id j29mr82091208yhm.47.1401957303164; Thu, 05 Jun 2014 
01:35:03 -0700 (PDT) Received: by 10.170.54.8 with HTTP; Thu, 5 Jun 2014 01:35:03 -0700 (PDT) In-Reply-To: <4F8352CFDC8643D0AB6F999E4A87847D@multiplay.co.uk> References: <4F8352CFDC8643D0AB6F999E4A87847D@multiplay.co.uk> Date: Thu, 5 Jun 2014 09:35:03 +0100 Message-ID: Subject: Re: Recover ZFS pool after re-initialization From: krad To: Steven Hartland Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 05 Jun 2014 08:35:04 -0000 try and import -D before you do anything else, you might be lucky, very lucky On 1 June 2014 16:36, Steven Hartland wrote: > ----- Original Message ----- From: "Pavlo Greenberg" > To: > Sent: Sunday, June 01, 2014 3:21 PM > Subject: Recover ZFS pool after re-initialization > > > > Hello. >> Is it possible to recover the data from a pool that was accidentally >> destroyed? I erroneously ran the wrong command from my bash history >> and instead of "zpool import" did "zpool create". I didn't write >> anything on this pool after that. The history of the pool is empty now >> and I can't roll back what I did. >> Is there any way to bring the previous pool back or even somehow >> restore the data it contained? Peculiar situation, I know, but >> sometimes the most foolish errors are the most fatal. >> > > I would guess not if you did a full create, but I'm very surprised that > worked, as without specifying the additional device parameters the create > should fail, so it's rather hard to confuse the two. > > In addition I would also expect create to check for an already existing > pool on the devices before allowing you to proceed. If that's not the case > it would be a worthwhile check to add IMO.
> > Regards > Steve > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Jun 5 12:15:26 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AE2D3C75 for ; Thu, 5 Jun 2014 12:15:26 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 95EC1218A for ; Thu, 5 Jun 2014 12:15:26 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s55CFQpq006473 for ; Thu, 5 Jun 2014 13:15:26 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167977] [smbfs] mount_smbfs results are differ when utf-8 or UTF-8 local encoding's name is used Date: Thu, 05 Jun 2014 12:15:25 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: simplexe@mail.ru X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 05 Jun 2014 12:15:26 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167977 Semen Soldatov changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |simplexe@mail.ru --- Comment #3 from Semen Soldatov --- (In reply to buganini from comment #2) > How about change line 105 and 107 in /usr/src/sys/libkern/iconv_ucs.c > from strcmp to strcasecmp ? > > Regards, > Buganini with strcasecmp all works ok. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 5 13:52:48 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2F8B8EBE for ; Thu, 5 Jun 2014 13:52:48 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 165882021 for ; Thu, 5 Jun 2014 13:52:48 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s55DqlK5068196 for ; Thu, 5 Jun 2014 14:52:47 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167977] [smbfs] mount_smbfs results are differ when utf-8 or UTF-8 local encoding's name is used Date: Thu, 05 Jun 2014 13:52:46 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: jhb@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc attachments.created Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 05 Jun 2014 13:52:48 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167977 John Baldwin changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jhb@FreeBSD.org --- Comment #4 from John Baldwin --- Created attachment 143407 --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=143407&action=edit kiconv_case.patch I think this is correct as iconv(1) accepts case-insensitive encoding names. I think there is one other place that should also use strcasecmp() (when looking to see if we already have an existing encoding mapping that can be reused). -- You are receiving this mail because: You are the assignee for the bug. 
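[Archive note: John's rationale is easy to check from userland, where iconv(1) already accepts either spelling of an encoding name. A quick sh sketch; the mount_smbfs line is illustrative only, with a made-up share and server:]

    # iconv(1) treats encoding names case-insensitively, so these two
    # pipelines produce identical output:
    printf 'test\n' | iconv -f utf-8 -t KOI8-R
    printf 'test\n' | iconv -f UTF-8 -t KOI8-R

    # The kernel-side kiconv lookup used by mount_smbfs compared names
    # with strcmp, which is why a mount like this behaved differently
    # for "utf-8" versus "UTF-8" before the patch:
    # mount_smbfs -E utf-8:cp866 //user@server/share /mnt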
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 5 17:46:25 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5ACF5508 for ; Thu, 5 Jun 2014 17:46:25 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 42ED72794 for ; Thu, 5 Jun 2014 17:46:25 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s55HkPrO073008 for ; Thu, 5 Jun 2014 18:46:25 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 156545] [ufs] mv could break UFS on SMP systems Date: Thu, 05 Jun 2014 17:46:25 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: kib@FreeBSD.org X-Bugzilla-Status: Issue Resolved X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 05 Jun 2014 17:46:25 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=156545 Konstantin Belousov changed: What |Removed |Added ---------------------------------------------------------------------------- Status|Needs MFC |Issue Resolved CC| |kib@FreeBSD.org Resolution|--- |FIXED -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 5 21:10:40 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AA92B2D5 for ; Thu, 5 Jun 2014 21:10:40 +0000 (UTC) Received: from mail-qg0-x22b.google.com (mail-qg0-x22b.google.com [IPv6:2607:f8b0:400d:c04::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6627C2A90 for ; Thu, 5 Jun 2014 21:10:40 +0000 (UTC) Received: by mail-qg0-f43.google.com with SMTP id 63so2696433qgz.30 for ; Thu, 05 Jun 2014 14:10:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtual.org.ua; s=google; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=cijDBo3s94T4KNemteCnnqDD6WHXFT5sqx7Tne+sypY=; b=OwZEn7zclP9MzLIHen8uow0AHirUpNwZksblsnEDXQgXdqapCEwODeIrkEreebrLg3 N3x6a2QAh0UvEDjjSAeBkqQXCxBXRbxqooT3RwK6wnDmovEJGF5DtKBf2DFZekPZRySR o5Y0J+/F1g+hR6sVIGoCcb5FCRqRmQzGms8TM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc:content-type; bh=cijDBo3s94T4KNemteCnnqDD6WHXFT5sqx7Tne+sypY=; b=YHl9U8E2NxXW19+6JBIxPTFecmCouNtZevDxYYenvw88Qtlz2V8olW+Ne+ueUJ+r0u Q/8BPKZaIv8cOwr9RAb/caoZnkcXiuCPXMZVptFkGaunMQ4ofE3W2J40x/QgVwh+t/0n I2aK68+JzsO2vEclh24cAaZuZBzi9Y86GTa9yJHo2n3tnwIAxsXeCaYdjkQqV56UJuAg CTChlZp3aXDRYMC3rYgu0vc44OZRmUF70KSGXq7GaO/4embRkl0ZIWm0rEVOv3ZG9Z4V /BzgXvHM5RSdRBDH5TmYJPjejAgx2Kc8icmP4sK0yeQNR8b5D/jZT65AfPL8I/x3Altx yBmA== X-Gm-Message-State: ALoCoQlRFIUSzup5WcOsG6W6NSdJ1kU7s7Od9p8lTjN8480DPnUCt/A0aR4dgWesHR9KnJDIGYOC X-Received: by 10.224.63.137 with SMTP id b9mr479488qai.70.1402002639352; Thu, 05 Jun 2014 14:10:39 -0700 (PDT) MIME-Version: 1.0 Received: by 10.140.92.110 with HTTP; Thu, 5 Jun 2014 14:10:19 -0700 (PDT) In-Reply-To: References: <4F8352CFDC8643D0AB6F999E4A87847D@multiplay.co.uk> From: Pavlo Greenberg Date: Fri, 6 Jun 2014 00:10:19 +0300 Message-ID: Subject: Re: Recover ZFS pool after re-initialization To: krad Content-Type: text/plain; charset=UTF-8 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 05 Jun 2014 21:10:40 -0000 2014-06-05 11:35 GMT+03:00 krad : > try and import -D before you do anything else, you might be lucky, very > lucky Unfortunately I am not. "zpool import -D" replies me "no pools available to import". 
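[Archive note: for anyone finding this thread later, the usual escalation for a lost pool looks like the sh sketch below; "tank" and /mnt are illustrative. None of it can help once "zpool create" has written fresh labels over the old ones, which appears to be what happened here:]

    # List destroyed pools whose labels are still intact, then attempt a
    # read-only import under an alternate root so nothing is written.
    zpool import -D
    zpool import -D -f -o readonly=on -R /mnt tank

    # Last resort: -F asks import to rewind to the newest importable txg.
    zpool import -D -fF -R /mnt tank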
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 01:58:08 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 70BA2CBC for ; Fri, 6 Jun 2014 01:58:08 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 57DC623CC for ; Fri, 6 Jun 2014 01:58:08 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s561w8G8076963 for ; Fri, 6 Jun 2014 02:58:08 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 186720] [xfs] is xfs now unsupported in the kernel? Date: Fri, 06 Jun 2014 01:58:08 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: bjk@FreeBSD.org X-Bugzilla-Status: Issue Resolved X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 01:58:08 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=186720 Benjamin Kaduk changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Discussion |Issue Resolved CC| |bjk@FreeBSD.org Resolution|--- |Works As Intended --- Comment #2 from Benjamin Kaduk --- The XFS support was removed from stable/10 (then head) in r247631, as part of the MPSAFE-VFS project. The XFS kernel code was not safe to operate without the VFS layer automatically grabbing the Giant kernel lock. The compat code in the VFS to automatically grab the Giant lock was removed to further general improvements to the VFS layer, and will not be reintroduced. Even read-only support would require substantial work, and I do not expect that it will be forthcoming. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 04:38:32 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 57ED0CBC for ; Fri, 6 Jun 2014 04:38:32 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3FC4621DB for ; Fri, 6 Jun 2014 04:38:32 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s564cWv3034671 for ; Fri, 6 Jun 2014 05:38:32 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167977] [smbfs] mount_smbfs results are differ when utf-8 or UTF-8 local encoding's name is used Date: Fri, 06 Jun 2014 04:38:31 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: simplexe@mail.ru X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 04:38:32 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167977 --- Comment #5 from Semen Soldatov --- (In reply to John Baldwin from comment #4) > Created attachment 143407 [details] > kiconv_case.patch > > I think this is correct as iconv(1) accepts case-insensitive encoding names. > I think there is one other place that should also use strcasecmp() (when > looking to see if we already have an existing encoding mapping that can be > reused). Thanks, I set strcasecmp in iconv_ucs.c and the utf8 encoding map works OK. If I set koi8-r without the strcasecmp patch in iconv.c then filenames are broken. Your patch works fine. -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 08:00:10 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D38B93FB for ; Fri, 6 Jun 2014 08:00:10 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C1ED721BD for ; Fri, 6 Jun 2014 08:00:10 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5680Aos090593 for ; Fri, 6 Jun 2014 09:00:10 +0100 (BST) (envelope-from bz-noreply@freebsd.org) Message-Id: <201406060800.s5680Aos090593@kenobi.freebsd.org> From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Fri, 06 Jun 2014 08:00:10 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 08:00:10 -0000 Hi, You have a bug in the "Needs MFC" state which has not been touched in 7 days. This email serves as a reminder that you may want to MFC this bug or marked it as completed. In the event you have a longer MFC timeout you may update this bug with a comment and I won't remind you again for 7 days. This reminder is an experimental feature. Please file a bug or mail bugmeister@ with concerns. This search was scheduled by eadler@FreeBSD.org. 
(7 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 154228: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154228 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [md] md getting stuck in wdrain state Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device Bug 180236: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180236 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] [nullfs] Leakage free space using ZFS with nullfs on 9.1-STABLE From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 11:05:11 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CAE04B21 for ; Fri, 6 Jun 2014 11:05:11 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6685722F6 for ; Fri, 6 Jun 2014 11:05:11 +0000 (UTC) Received: from PAIMAIL.pai.local (paimail.pai.local [10.10.0.153]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s56B3W7C084542 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL) for ; Fri, 6 Jun 2014 07:03:32 -0400 (EDT) (envelope-from mikej@paymentallianceintl.com) Received: from PAIMAIL.pai.local ([::1]) by PAIMAIL.pai.local ([::1]) with mapi; Fri, 6 Jun 2014 07:03:32 -0400 From: Michael Jung To: "freebsd-fs@freebsd.org" Date: Fri, 6 Jun 2014 07:03:04 -0400 Subject: i/o error - all block copies unavailable Thread-Topic: i/o error - all block copies unavailable Thread-Index: Ac+BEkYdJYGz/75nTLC484fF9qnWqgAAu/WgABgnQuA= Message-ID: References: In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: yes X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: multipart/mixed; boundary="_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_" MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list 
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 11:05:11 -0000 --_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_ Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable

Hello:

I have run into this issue before but dismissed it because it was the early days of ZFS 8.x/9.0.x, and simply reinstalled. I have backups of my data, but this is my home machine and I'd rather figure this out than submit to a re-install.

For what it is worth, I thought that my previous adventures with this were from updating the zfs version without updating the boot loader; note that all my previous mis-adventures were on single-drive zfs-on-boot, which is not the case now. This was my first upgrade of bootcode from a clean install of 10-stable as of ~two weeks ago. The pool was not upgraded but I applied boot code anyway :-(

This system again is mirrored zfs-on-boot and I simply upgraded world/kernel this AM EST ~10:00 2014/06/05, did an installkernel && installworld, and then

gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1

which should be the correct index, and rebooted, and now.. well I'm *(&#&@

The attachments are from booting a current release - same results from booting my daily builds ISO/IMG of work for 10-stable. The booted image does not see the ZFS partitions, i.e. zfs list shows nothing.

Any help to resolve this would be most appreciated - I find a lot of comments about this issue searching through Google but little as to how to resolve it.

Kind Regards,

Michael Jung

p.s. If the attachments are lost they are here:

http://216.26.158.189/bootloader.txt
http://216.26.158.189/gpart.list
http://216.26.158.189/gpart.show
http://216.26.158.189/zfs.list

GoPai.com | Facebook.com/PaymentAlliance CONFIDENTIALITY NOTE: This message is intended only for the use of the individual or entity to whom it is addressed and may contain information that is privileged, confidential, and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, please notify us by telephone at (502) 212-4001 or notify us at PAI , Dept.
99, 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205 --_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_ Content-Type: text/plain; name="booloader.txt" Content-Description: booloader.txt Content-Disposition: attachment; filename="booloader.txt"; size=431; creation-date="Thu, 05 Jun 2014 22:58:51 GMT"; modification-date="Fri, 06 Jun 2014 10:57:56 GMT"

[booloader.txt, decoded from its base64 body:]

<ME TYping>

Loading Operating System....
ZFS:  i/o error - all block copies unavailable
ZFS: can't read object set for dataset u
ZFS: can't open root filesystem
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
Default: zroot:
boot:

boot: status
pool: zroot
boofs: zroot/ROOT/defualt
config:

	NAME STATE
	zroot ONLINE
	  mirror ONLINE
	    gtpid/E17....... ONLINE
	    gtpid/e26...... ONLINE

--_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_ Content-Type: text/plain; name="gpart.list" Content-Description: gpart.list Content-Disposition: attachment; filename="gpart.list"; size=7696; creation-date="Thu, 05 Jun 2014 22:59:08 GMT"; modification-date="Thu, 05 Jun 2014 22:59:08 GMT"

[gpart.list: base64 attachment body omitted; plain copy at http://216.26.158.189/gpart.list]
OiAxMTk0NDU3MDA2MDggKDExMUcpCiAgIFNlY3RvcnNpemU6IDUxMgogICBTdHJpcGVzaXplOiAw CiAgIFN0cmlwZW9mZnNldDogMjA0ODAKICAgTW9kZTogcjB3MGUwCiAgIHJhd3V1aWQ6IGFmOTBm NjkzLWRkYTYtMTFlMy1iMDBmLTAwMWIyMTFlMmU0NAogICByYXd0eXBlOiA1MTZlN2NiYS02ZWNm LTExZDYtOGZmOC0wMDAyMmQwOTcxMmIKICAgbGFiZWw6IGNhY2hlCiAgIGxlbmd0aDogMTE5NDQ1 NzAwNjA4CiAgIG9mZnNldDogODU4OTk1NTA3MgogICB0eXBlOiBmcmVlYnNkLXpmcwogICBpbmRl eDogMgogICBlbmQ6IDI1MDA2OTYzOQogICBzdGFydDogMTY3NzcyNTYKQ29uc3VtZXJzOgoxLiBO YW1lOiBkaXNraWQvRElTSy01MzlDMDczODA5MzEwMDAwNTA0MgogICBNZWRpYXNpemU6IDEyODAz NTY3NjE2MCAoMTE5RykKICAgU2VjdG9yc2l6ZTogNTEyCiAgIE1vZGU6IHIwdzBlMAoKR2VvbSBu YW1lOiBkYTAKbW9kaWZpZWQ6IGZhbHNlCnN0YXRlOiBPSwpmd2hlYWRzOiAyNTUKZndzZWN0b3Jz OiA2MwpsYXN0OiA0MDg1NzU5CmZpcnN0OiAwCmVudHJpZXM6IDgKc2NoZW1lOiBCU0QKUHJvdmlk ZXJzOgoxLiBOYW1lOiBkYTBhCiAgIE1lZGlhc2l6ZTogNjk3Mzg0OTYwICg2NjVNKQogICBTZWN0 b3JzaXplOiA1MTIKICAgTW9kZTogcjF3MWUxCiAgIHJhd3R5cGU6IDcKICAgbGVuZ3RoOiA2OTcz ODQ5NjAKICAgb2Zmc2V0OiAwCiAgIHR5cGU6IGZyZWVic2QtdWZzCiAgIGluZGV4OiAxCiAgIGVu ZDogMTM2MjA3OQogICBzdGFydDogMApDb25zdW1lcnM6CjEuIE5hbWU6IGRhMAogICBNZWRpYXNp emU6IDIwOTE5MDkxMjAgKDEuOUcpCiAgIFNlY3RvcnNpemU6IDUxMgogICBNb2RlOiByMXcxZTIK Cg== --_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_ Content-Type: text/plain; name="gpart.show" Content-Description: gpart.show Content-Disposition: attachment; filename="gpart.show"; size=2131; creation-date="Thu, 05 Jun 2014 22:59:17 GMT"; modification-date="Thu, 05 Jun 2014 22:59:17 GMT" Content-Transfer-Encoding: base64 PT4gICAgICAgIDM0ICA1ODYwNTMzMTAxICBhZGEwICBHUFQgICgyLjdUKQogICAgICAgICAgMzQg ICAgICAgICAgIDYgICAgICAgIC0gZnJlZSAtICAoMy4wSykKICAgICAgICAgIDQwICAgICAgICAx MDI0ICAgICAxICBmcmVlYnNkLWJvb3QgICg1MTJLKQogICAgICAgIDEwNjQgICAgNjcxMDg4NjQg ICAgIDIgIGZyZWVic2Qtc3dhcCAgKDMyRykKICAgIDY3MTA5OTI4ICA1NzkzNDIzMjAwICAgICAz ICBmcmVlYnNkLXpmcyAgKDIuN1QpCiAgNTg2MDUzMzEyOCAgICAgICAgICAgNyAgICAgICAgLSBm cmVlIC0gICgzLjVLKQoKPT4gICAgICAgIDM0ICA1ODYwNTMzMTAxICBkaXNraWQvRElTSy1aMUY0 MEpTQyAgR1BUICAoMi43VCkKICAgICAgICAgIDM0ICAgICAgICAgICA2ICAgICAgICAgICAgICAg ICAgICAgICAgLSBmcmVlIC0gICgzLjBLKQogICAgICAgICAgNDAgICAgICAgIDEwMjQgICAgICAg ICAgICAgICAgICAgICAxICBmcmVlYnNkLWJvb3QgICg1MTJLKQogICAgICAgIDEwNjQgICAgNjcx MDg4NjQgICAgICAgICAgICAgICAgICAgICAyICBmcmVlYnNkLXN3YXAgICgzMkcpCiAgICA2NzEw OTkyOCAgNTc5MzQyMzIwMCAgICAgICAgICAgICAgICAgICAgIDMgIGZyZWVic2QtemZzICAoMi43 VCkKICA1ODYwNTMzMTI4ICAgICAgICAgICA3ICAgICAgICAgICAgICAgICAgICAgICAgLSBmcmVl IC0gICgzLjVLKQoKPT4gICAgICAgIDM0ICA1ODYwNTMzMTAxICBhZGExICBHUFQgICgyLjdUKQog ICAgICAgICAgMzQgICAgICAgICAgIDYgICAgICAgIC0gZnJlZSAtICAoMy4wSykKICAgICAgICAg IDQwICAgICAgICAxMDI0ICAgICAxICBmcmVlYnNkLWJvb3QgICg1MTJLKQogICAgICAgIDEwNjQg ICAgNjcxMDg4NjQgICAgIDIgIGZyZWVic2Qtc3dhcCAgKDMyRykKICAgIDY3MTA5OTI4ICA1Nzkz NDIzMjAwICAgICAzICBmcmVlYnNkLXpmcyAgKDIuN1QpCiAgNTg2MDUzMzEyOCAgICAgICAgICAg NyAgICAgICAgLSBmcmVlIC0gICgzLjVLKQoKPT4gICAgICAgMzQgIDI1MDA2OTYxMyAgYWRhMiAg R1BUICAoMTE5RykKICAgICAgICAgMzQgICAgICAgICAgNiAgICAgICAgLSBmcmVlIC0gICgzLjBL KQogICAgICAgICA0MCAgIDE2Nzc3MjE2ICAgICAxICBmcmVlYnNkLXpmcyAgKDguMEcpCiAgIDE2 Nzc3MjU2ICAyMzMyOTIzODQgICAgIDIgIGZyZWVic2QtemZzICAoMTExRykKICAyNTAwNjk2NDAg ICAgICAgICAgNyAgICAgICAgLSBmcmVlIC0gICgzLjVLKQoKPT4gICAgICAgIDM0ICA1ODYwNTMz MTAxICBkaXNraWQvRElTSy1aMUY0NDZNUiAgR1BUICAoMi43VCkKICAgICAgICAgIDM0ICAgICAg ICAgICA2ICAgICAgICAgICAgICAgICAgICAgICAgLSBmcmVlIC0gICgzLjBLKQogICAgICAgICAg NDAgICAgICAgIDEwMjQgICAgICAgICAgICAgICAgICAgICAxICBmcmVlYnNkLWJvb3QgICg1MTJL KQogICAgICAgIDEwNjQgICAgNjcxMDg4NjQgICAgICAgICAgICAgICAgICAgICAyICBmcmVlYnNk 
LXN3YXAgICgzMkcpCiAgICA2NzEwOTkyOCAgNTc5MzQyMzIwMCAgICAgICAgICAgICAgICAgICAg IDMgIGZyZWVic2QtemZzICAoMi43VCkKICA1ODYwNTMzMTI4ICAgICAgICAgICA3ICAgICAgICAg ICAgICAgICAgICAgICAgLSBmcmVlIC0gICgzLjVLKQoKPT4gICAgICAgMzQgIDI1MDA2OTYxMyAg ZGlza2lkL0RJU0stNTM5QzA3MzgwOTMxMDAwMDUwNDIgIEdQVCAgKDExOUcpCiAgICAgICAgIDM0 ICAgICAgICAgIDYgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAtIGZyZWUgLSAg KDMuMEspCiAgICAgICAgIDQwICAgMTY3NzcyMTYgICAgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAxICBmcmVlYnNkLXpmcyAgKDguMEcpCiAgIDE2Nzc3MjU2ICAyMzMyOTIzODQgICAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAyICBmcmVlYnNkLXpmcyAgKDExMUcpCiAgMjUwMDY5 NjQwICAgICAgICAgIDcgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAtIGZyZWUg LSAgKDMuNUspCgo9PiAgICAgIDAgIDQwODU3NjAgIGRhMCAgQlNEICAoMS45RykKICAgICAgICAw ICAxMzYyMDgwICAgIDEgIGZyZWVic2QtdWZzICAoNjY1TSkKICAxMzYyMDgwICAyNzIzNjgwICAg ICAgIC0gZnJlZSAtICAoMS4zRykKCg== --_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_ Content-Type: text/plain; name="zfs.list" Content-Description: zfs.list Content-Disposition: attachment; filename="zfs.list"; size=22; creation-date="Thu, 05 Jun 2014 22:59:30 GMT"; modification-date="Thu, 05 Jun 2014 22:59:30 GMT" Content-Transfer-Encoding: base64 bm8gZGF0YXNldHMgYXZhaWxhYmxlCg== --_005_CE3228E2C273EA4A9DC12FA75E921B8C479814D5A3PAIMAILpailoc_-- From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 11:09:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EE12EE62 for ; Fri, 6 Jun 2014 11:09:13 +0000 (UTC) Received: from na01-bn1-obe.outbound.protection.outlook.com (mail-bn1blp0187.outbound.protection.outlook.com [207.46.163.187]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (Client CN "mail.protection.outlook.com", Issuer "MSIT Machine Auth CA 2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id AE9252329 for ; Fri, 6 Jun 2014 11:09:13 +0000 (UTC) Received: from SN2PRD0310HT003.namprd03.prod.outlook.com (10.255.112.38) by BL2PR03MB164.namprd03.prod.outlook.com (10.255.230.148) with Microsoft SMTP Server (TLS) id 15.0.954.9; Fri, 6 Jun 2014 11:09:10 +0000 Received: from [10.0.0.114] (98.240.141.71) by pod51008.outlook.com (10.255.112.38) with Microsoft SMTP Server (TLS) id 14.16.459.0; Fri, 6 Jun 2014 11:09:10 +0000 Message-ID: <5391A154.80309@my.hennepintech.edu> Date: Fri, 6 Jun 2014 06:09:08 -0500 From: Andrew Berg User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Subject: Re: i/o error - all block copies unavailable References: In-Reply-To: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Originating-IP: [98.240.141.71] X-Microsoft-Antispam: BL:0; ACTION:Default; RISK:Low; SCL:0; SPMLVL:NotSpam; PCL:0; RULEID: X-Forefront-PRVS: 023495660C X-Forefront-Antispam-Report: SFV:NSPM; SFS:(6009001)(428001)(199002)(189002)(24454002)(20776003)(64706001)(50986999)(54356999)(79102001)(74502001)(87936001)(31966008)(47776003)(76482001)(65956001)(66066001)(74662001)(77096999)(83322001)(76176999)(83506001)(65816999)(46102001)(99396002)(4396001)(77982001)(75432001)(83072002)(85852003)(86362001)(102836001)(21056001)(92566001)(81542001)(81342001)(33656002)(558084003)(80022001)(92726001)(65806001)(64126003)(88552001)(23676002)(101416001)(50466002); DIR:OUT; SFP:; SCL:1; SRVR:BL2PR03MB164; H:SN2PRD0310HT003.namprd03.prod.outlook.com; FPR:; MLV:sfv; 
PTR:InfoNoRecords; A:0; MX:1; LANG:en; Received-SPF: None (: my.HennepinTech.edu does not designate permitted sender hosts) Authentication-Results: spf=none (sender IP is ) smtp.mailfrom=aberg010@my.HennepinTech.edu; X-OriginatorOrg: my.hennepintech.edu X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 11:09:14 -0000

On 2014.06.06 06:03, Michael Jung wrote:
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
You probably want /boot/gptzfsboot.

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 11:12:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E21DDEF for ; Fri, 6 Jun 2014 11:12:45 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 8E89523B8 for ; Fri, 6 Jun 2014 11:12:45 +0000 (UTC) Received: from PAIMAIL.pai.local (paimail.pai.local [10.10.0.153]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s56BCitJ085208 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL); Fri, 6 Jun 2014 07:12:44 -0400 (EDT) (envelope-from mikej@paymentallianceintl.com) Received: from PAICAS.pai.local (10.10.0.154) by PAIMAIL.pai.local (10.10.0.153) with Microsoft SMTP Server (TLS) id 8.3.348.2; Fri, 6 Jun 2014 07:12:43 -0400 Received: from PAIMAIL.pai.local ([::1]) by PAICAS.pai.local ([::1]) with mapi; Fri, 6 Jun 2014 07:12:43 -0400 From: Michael Jung To: Andrew Berg , "freebsd-fs@freebsd.org" Date: Fri, 6 Jun 2014 07:12:11 -0400 Subject: RE: i/o error - all block copies unavailable Thread-Topic: i/o error - all block copies unavailable Thread-Index: Ac+Bd8QQNwbdtlvESL+ZYp2IMqHNQQAAEqtw Message-ID: References: <5391A154.80309@my.hennepintech.edu> In-Reply-To: <5391A154.80309@my.hennepintech.edu> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 11:12:46 -0000

Argggh, stupid me - I guess that's what you get after working on things after a 14 hour day. Thank you - I'll report back.

Regards,

Michael Jung

-----Original Message----- From: owner-freebsd-fs@freebsd.org [mailto:owner-freebsd-fs@freebsd.org] On Behalf Of Andrew Berg Sent: Friday, June 06, 2014 7:09 AM To: freebsd-fs@freebsd.org Subject: Re: i/o error - all block copies unavailable

On 2014.06.06 06:03, Michael Jung wrote:
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
You probably want /boot/gptzfsboot.
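For the record, the working pair of commands for a GPT ZFS-on-root mirror like this one - Dmitry spells it out in full later in the thread - is:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

/boot/pmbr is the one-sector pre-loader that -b writes into the protective MBR, and /boot/gptzfsboot is the ZFS-aware partcode that -p writes into the freebsd-boot partition selected by -i 1.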
_______________________________________________ freebsd-fs@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

GoPai.com | Facebook.com/PaymentAlliance

CONFIDENTIALITY NOTE: This message is intended only for the use of the individual or entity to whom it is addressed and may contain information that is privileged, confidential, and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, please notify us by telephone at (502) 212-4001 or notify us at PAI, Dept. 99, 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 11:13:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 42DE124E for ; Fri, 6 Jun 2014 11:13:44 +0000 (UTC) Received: from forward1l.mail.yandex.net (forward1l.mail.yandex.net [84.201.143.144]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Certum Level IV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EB64323CD for ; Fri, 6 Jun 2014 11:13:43 +0000 (UTC) Received: from smtp18.mail.yandex.net (smtp18.mail.yandex.net [95.108.252.18]) by forward1l.mail.yandex.net (Yandex) with ESMTP id 2A2481520DB5; Fri, 6 Jun 2014 15:13:35 +0400 (MSK) Received: from smtp18.mail.yandex.net (localhost [127.0.0.1]) by smtp18.mail.yandex.net (Yandex) with ESMTP id C897118A05AD; Fri, 6 Jun 2014 15:13:34 +0400 (MSK) Received: from 84.201.164.102-vpn.dhcp.yndx.net (84.201.164.102-vpn.dhcp.yndx.net [84.201.164.102]) by smtp18.mail.yandex.net (nwsmtp/Yandex) with ESMTPSA id jNg5ZQvxUP-DYmujhjl; Fri, 6 Jun 2014 15:13:34 +0400 (using TLSv1 with cipher AES128-SHA (128/128 bits)) (Client certificate not present) X-Yandex-Uniq: 197875ec-d137-4a4d-8bcf-dd5954b7f665 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1402053214; bh=xab40lEOtUQYq3ZnhjMEsYDHe+XBxj7jAuoxmW8OroQ=; h=Message-ID:Date:From:User-Agent:MIME-Version:To:Subject: References:In-Reply-To:X-Enigmail-Version:Content-Type: Content-Transfer-Encoding; b=TslWuTadUNhkKNP98KiJw0t2kndTzX11eUXHtzT8N6fgkuP4HUjdKzZPwPyHMYxZM I7UiRh5BGBqFiwSUkALVzQFRxWjwzu11firvFZyfRiGE3UBDYmz7Qe05Bs+Ald58Hm SNQPSFHyn674ezPXp7Vjha828QvTwQXWqHusZNC8= Authentication-Results: smtp18.mail.yandex.net; dkim=pass header.i=@yandex.ru Message-ID: <5391A23B.5050406@yandex.ru> Date: Fri, 06 Jun 2014 15:12:59 +0400 From: "Andrey V. Elsukov"
Elsukov" User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: Michael Jung , "freebsd-fs@freebsd.org" Subject: Re: i/o error - all block copies unavailable References: In-Reply-To: X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 11:13:44 -0000 On 06.06.2014 15:03, Michael Jung wrote: > This system again is mirrored zfs-on-boot and I simply upgraded > world/kernel this AM EST ~10:00 2014/06/05 and did a installkernel && > installworld and then > > gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0 gpart > bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1 I guess it's a typo, but you should use gptzfsboot to be able to boot from ZFS root. > Which should be the correct index and rebooted and now.. well I'm > *(&#&@ > > The attachments are from booting a current release - same results > from booting my daily builds ISO/IMG of work for10 stable. The > booted image does not see the ZFS Partiations. I.E. zfs list show > nothing. You need to import the pool first with `zpool import zroot`, then you will see it in `zpool list` and `zfs list`. -- WBR, Andrey V. Elsukov From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 15:54:53 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8084DA06 for ; Fri, 6 Jun 2014 15:54:53 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 01D442172 for ; Fri, 6 Jun 2014 15:54:52 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id s56FshR3064215 for ; Fri, 6 Jun 2014 19:54:43 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Fri, 6 Jun 2014 19:54:43 +0400 (MSK) From: Dmitry Morozovsky To: freebsd-fs@FreeBSD.org Subject: zfs set on a faulty volume Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Fri, 06 Jun 2014 19:54:43 +0400 (MSK) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 15:54:53 -0000 Dear colleagues, reviving home file server with two consecutively faulting disks (one starts ATA detaches-attaches, changing cables does not help; the second just has 100+remaps and some smart pending sectors) I've found that usual zfs send | zfs receive sequence does not work: despite there are very few file that could not be recovered fully, the whole process stops with cannot receive: invalid stream (checksum mismatch) I gradually zfs send&received all "clean enough" FSes, and then use rsync on the largest file storage, and in my case it was not a problem, but: is there a way to instruct zfs send to skip (and log, of course) unrecoverable parts of data? 
Quick googling does not help much. Thanks! -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 16:35:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0218779D for ; Fri, 6 Jun 2014 16:35:13 +0000 (UTC) Received: from lwfs1-cam.cam.lispworks.com (mail.lispworks.com [46.17.166.21]) by mx1.freebsd.org (Postfix) with ESMTP id 7A37F2575 for ; Fri, 6 Jun 2014 16:35:11 +0000 (UTC) Received: from higson.cam.lispworks.com (higson.cam.lispworks.com [192.168.1.7]) by lwfs1-cam.cam.lispworks.com (8.14.5/8.14.5) with ESMTP id s56GOOu5098812; Fri, 6 Jun 2014 17:24:24 +0100 (BST) (envelope-from martin@lispworks.com) Received: from higson.cam.lispworks.com (localhost.localdomain [127.0.0.1]) by higson.cam.lispworks.com (8.14.4) id s56GOOSc015825; Fri, 6 Jun 2014 17:24:24 +0100 Received: (from martin@localhost) by higson.cam.lispworks.com (8.14.4/8.14.4/Submit) id s56GOOx7015821; Fri, 6 Jun 2014 17:24:24 +0100 Date: Fri, 6 Jun 2014 17:24:24 +0100 Message-Id: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> From: Martin Simmons To: freebsd-fs@freebsd.org Subject: Is ZFS Multi-vdev root pool configuration discovery supported? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 16:35:13 -0000 Hi, Does anyone know if FreeBSD 9.2 (or indeed head) supports "Root On ZFS" with a multi-vdev pool (e.g. 4 disks configured as two mirrored pairs)? Looking at the source, sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c contains the comment: /* * Multi-vdev root pool configuration discovery is not supported yet. */ but the code below it from r243502 suggests that this is no longer true. 
__Martin

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 17:37:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 75FDED33 for ; Fri, 6 Jun 2014 17:37:10 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 2424E2B27 for ; Fri, 6 Jun 2014 17:37:09 +0000 (UTC) Received: from PAIMAIL.pai.local (paimail.pai.local [10.10.0.153]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s56Hb76d029061 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL); Fri, 6 Jun 2014 13:37:07 -0400 (EDT) (envelope-from mikej@paymentallianceintl.com) Received: from PAIMAIL.pai.local ([::1]) by PAIMAIL.pai.local ([::1]) with mapi; Fri, 6 Jun 2014 13:37:06 -0400 From: Michael Jung To: Andrew Berg , "freebsd-fs@freebsd.org" Date: Fri, 6 Jun 2014 13:37:05 -0400 Subject: RE: i/o error - all block copies unavailable Thread-Topic: i/o error - all block copies unavailable Thread-Index: Ac+Bd8QQNwbdtlvESL+ZYp2IMqHNQQANNK6t Message-ID: References: , <5391A154.80309@my.hennepintech.edu> In-Reply-To: <5391A154.80309@my.hennepintech.edu> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 17:37:10 -0000

Andrew:

My problem is that I cannot get the pool mounted, even with a different mount point, to re-apply the bootcode. "zfs list" returns "no datasets available".

I have tried booting a current 10-stable ISO and a current (head) ISO. I am dropping into single user and made sure that opensolaris.ko and zfs.ko had been loaded.

Regards,
--mikej

________________________________________ From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Andrew Berg [aberg010@my.hennepintech.edu] Sent: Friday, June 06, 2014 7:09 AM To: freebsd-fs@freebsd.org Subject: Re: i/o error - all block copies unavailable

On 2014.06.06 06:03, Michael Jung wrote:
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
You probably want /boot/gptzfsboot.
_______________________________________________ freebsd-fs@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

GoPai.com | Facebook.com/PaymentAlliance

CONFIDENTIALITY NOTE: This message is intended only for the use of the individual or entity to whom it is addressed and may contain information that is privileged, confidential, and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, please notify us by telephone at (502) 212-4001 or notify us at PAI, Dept. 99, 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 17:46:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 897C8663 for ; Fri, 6 Jun 2014 17:46:50 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4C3DB2C2E for ; Fri, 6 Jun 2014 17:46:50 +0000 (UTC) Received: from PAIMAIL.pai.local (paimail.pai.local [10.10.0.153]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s56Hkm26030690 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL); Fri, 6 Jun 2014 13:46:49 -0400 (EDT) (envelope-from mikej@paymentallianceintl.com) Received: from PAIMAIL.pai.local ([::1]) by PAIMAIL.pai.local ([::1]) with mapi; Fri, 6 Jun 2014 13:46:48 -0400 From: Michael Jung To: Michael Jung , Andrew Berg , "freebsd-fs@freebsd.org" Date: Fri, 6 Jun 2014 13:42:34 -0400 Subject: RE: i/o error - all block copies unavailable Thread-Topic: i/o error - all block copies unavailable Thread-Index: Ac+Bd8QQNwbdtlvESL+ZYp2IMqHNQQANNK6tAACHG20= Message-ID: References: , <5391A154.80309@my.hennepintech.edu>, In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 17:46:50 -0000

Arggg.. I shouldn't need to mount the pool!

#/sbin/gpart bootcode -b /boot/gptzfsboot -p /boot/gptboot -I 1 ada0
gpart: File too large
#

But if one looks at provider 1 on ada0 it is 512k and type freebsd-boot...

#ls -la /boot/gptzfsboot
-r--r--r--  1 root  wheel  41476 May 25 19:26 /boot/gptzfsboot
#

So why the "File too large" message?

Regards,
--mikej

________________________________________ From: Michael Jung Sent: Friday, June 06, 2014 1:37 PM To: Andrew Berg; freebsd-fs@freebsd.org Subject: RE: i/o error - all block copies unavailable

Andrew:

My problem is that I cannot get the pool mounted, even with a different mount point, to re-apply the bootcode. "zfs list" returns "no datasets available".

I have tried booting a current 10-stable ISO and a current (head) ISO. I am dropping into single user and made sure that opensolaris.ko and zfs.ko had been loaded.

Regards,
--mikej

________________________________________ From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Andrew Berg [aberg010@my.hennepintech.edu] Sent: Friday, June 06, 2014 7:09 AM To: freebsd-fs@freebsd.org Subject: Re: i/o error - all block copies unavailable

On 2014.06.06 06:03, Michael Jung wrote:
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
> gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
You probably want /boot/gptzfsboot.
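The error itself is the clue here: -b expects the one-sector pre-loader (/boot/pmbr) that goes into the protective MBR, so the roughly 40 KB gptzfsboot image cannot fit in that slot and gpart fails with "File too large". The -b and -p arguments above are simply swapped; as the next reply points out, the intended invocation is:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0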
_______________________________________________ freebsd-fs@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" GoPai.com | Facebook.com/PaymentAlliance CONFIDENTIALITY NOTE: This message is intended only for the use of the individual or entity to whom it is addressed and may contain information that is privileged, confidential, and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, please notify us by telephone at (502) 212-4001 or notify us at PAI , Dept. 99, 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205 From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 17:55:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 81B05DF1 for ; Fri, 6 Jun 2014 17:55:46 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0F4A92D2D for ; Fri, 6 Jun 2014 17:55:45 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id s56HtgtO065425; Fri, 6 Jun 2014 21:55:42 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Fri, 6 Jun 2014 21:55:42 +0400 (MSK) From: Dmitry Morozovsky To: Michael Jung Subject: RE: i/o error - all block copies unavailable In-Reply-To: Message-ID: References: , <5391A154.80309@my.hennepintech.edu>, User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Fri, 06 Jun 2014 21:55:42 +0400 (MSK) Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 17:55:46 -0000 On Fri, 6 Jun 2014, Michael Jung wrote: > Arggg.. I shouldn't need to mount the pool! > > #/sbin/gpart bootcode -b /boot/gptzfsboot -p /boot/gptboot -I 1 ada0 > gpart: File too large > # > > But if one looks at provider 1 on ada0 it is 512k and type freebsd.boot... > > #ls -la /boot/gptzfsboot > -r--r--r--1 root wheel 41476 May 25 19:26 /boot/gptzfsboot > # > > So why the file to large message? You overmixed them ;) gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0 (lowercase 'i' and pmbr, 1-sector pre-loader for -b) > > > Regards, > --mikej > > ________________________________________ > From: Michael Jung > Sent: Friday, June 06, 2014 1:37 PM > To: Andrew Berg; freebsd-fs@freebsd.org > Subject: RE: i/o error - all block copies unavailable > > Andrew: > > My problem is that I can not get the pool mounted even with a different mount point to re-reapply the bootcode. "zfs list" returns "no datasets available" > > I have tried booting a current 10-stable ISO and a current ISO. I am dropping into single user and made sure that opensolaris.ko and zfs.ko had been loaded. 
> > Regards, > --mikej > > ________________________________________ > From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Andrew Berg [aberg010@my.hennepintech.edu] > Sent: Friday, June 06, 2014 7:09 AM > To: freebsd-fs@freebsd.org > Subject: Re: i/o error - all block copies unavailable > > On 2014.06.06 06:03, Michael Jung wrote: > > gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0 > > gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1 > You probably want /boot/gptzfsboot. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > > GoPai.com | Facebook.com/PaymentAlliance > > > CONFIDENTIALITY NOTE: This message is intended only for the use > of the individual or entity to whom it is addressed and may > contain information that is privileged, confidential, and > exempt from disclosure under applicable law. If the reader > of this message is not the intended recipient, you are hereby > notified that any dissemination, distribution or copying > of this communication is strictly prohibited. If you have > received this transmission in error, please notify us by > telephone at (502) 212-4001 or notify us at PAI , Dept. 99, > 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205 > > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > -- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 18:05:33 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4B6975F4 for ; Fri, 6 Jun 2014 18:05:33 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0FF632E4F for ; Fri, 6 Jun 2014 18:05:32 +0000 (UTC) Received: from PAIMAIL.pai.local (paimail.pai.local [10.10.0.153]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s56I5VYI033254 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL) for ; Fri, 6 Jun 2014 14:05:31 -0400 (EDT) (envelope-from mikej@paymentallianceintl.com) Received: from PAIMAIL.pai.local ([::1]) by PAIMAIL.pai.local ([::1]) with mapi; Fri, 6 Jun 2014 14:05:31 -0400 From: Michael Jung To: Dmitry Morozovsky Date: Fri, 6 Jun 2014 14:05:30 -0400 Subject: RE: i/o error - all block copies unavailable Thread-Topic: i/o error - all block copies unavailable Thread-Index: Ac+BsI8cIQNglkb7RMqspr3R3fl/MQAAG4pN Message-ID: References: , <5391A154.80309@my.hennepintech.edu>, , In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: text/plain; charset="us-ascii" 
Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 18:05:33 -0000

________________________________________ From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Dmitry Morozovsky [marck@rinet.ru] Sent: Friday, June 06, 2014 1:55 PM To: Michael Jung Cc: freebsd-fs@freebsd.org Subject: RE: i/o error - all block copies unavailable

On Fri, 6 Jun 2014, Michael Jung wrote:

> Arggg.. I shouldn't need to mount the pool!
>
> #/sbin/gpart bootcode -b /boot/gptzfsboot -p /boot/gptboot -I 1 ada0
> gpart: File too large
> #
>
> But if one looks at provider 1 on ada0 it is 512k and type freebsd-boot...
>
> #ls -la /boot/gptzfsboot
> -r--r--r--  1 root  wheel  41476 May 25 19:26 /boot/gptzfsboot
> #
>
> So why the "File too large" message?

You overmixed them ;)

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

(lowercase 'i' and pmbr, 1-sector pre-loader for -b)

> Regards,
> --mikej
>
> ________________________________________
> From: Michael Jung
> Sent: Friday, June 06, 2014 1:37 PM
> To: Andrew Berg; freebsd-fs@freebsd.org
> Subject: RE: i/o error - all block copies unavailable
>
> Andrew:
>
> My problem is that I cannot get the pool mounted, even with a different mount point, to re-apply the bootcode. "zfs list" returns "no datasets available".
>
> I have tried booting a current 10-stable ISO and a current (head) ISO. I am dropping into single user and made sure that opensolaris.ko and zfs.ko had been loaded.
>
> Regards,
> --mikej
>
> ________________________________________
> From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Andrew Berg [aberg010@my.hennepintech.edu]
> Sent: Friday, June 06, 2014 7:09 AM
> To: freebsd-fs@freebsd.org
> Subject: Re: i/o error - all block copies unavailable
>
> On 2014.06.06 06:03, Michael Jung wrote:
> > gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
> > gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
> You probably want /boot/gptzfsboot.
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> GoPai.com | Facebook.com/PaymentAlliance
>
> CONFIDENTIALITY NOTE: This message is intended only for the use
> of the individual or entity to whom it is addressed and may
> contain information that is privileged, confidential, and
> exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby
> notified that any dissemination, distribution or copying
> of this communication is strictly prohibited. If you have
> received this transmission in error, please notify us by
> telephone at (502) 212-4001 or notify us at PAI, Dept. 99,
> 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ _______________________________________________ freebsd-fs@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to freebsd-fs-unsubscribe@freebsd.org

Dmitry:

I should have read UPDATING AGAIN before even posting my problem, instead of doing things from memory ;-) But even after booting from a current 10-stable ISO and running

gpart bootcode -p /boot/gptzfsboot -i 1 ad0
gpart bootcode -p /boot/gptzfsboot -i 1 ad1

it still yields my original issue:

Loading Operating System....
ZFS: i/o error - all block copies unavailable
ZFS: can't read object set for dataset u
ZFS: can't open root filesystem
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
Default: zroot:
boot:

boot: status
pool: zroot
boofs: zroot/ROOT/defualt
config:

	NAME STATE
	zroot ONLINE
	  mirror ONLINE
	    gtpid/E17....... ONLINE
	    gtpid/e26...... ONLINE

FreeBSD/x86 boot
Default: zroot:
boot:

GoPai.com | Facebook.com/PaymentAlliance

CONFIDENTIALITY NOTE: This message is intended only for the use of the individual or entity to whom it is addressed and may contain information that is privileged, confidential, and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, please notify us by telephone at (502) 212-4001 or notify us at PAI, Dept. 99, 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 19:18:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2369F4FC for ; Fri, 6 Jun 2014 19:18:27 +0000 (UTC) Received: from mx2.paymentallianceintl.com (mx2.paymentallianceintl.com [216.26.158.171]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "mx2.paymentallianceintl.com", Issuer "Go Daddy Secure Certification Authority" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A59D224DC for ; Fri, 6 Jun 2014 19:18:26 +0000 (UTC) Received: from PAIMAIL.pai.local (paimail.pai.local [10.10.0.153]) by mx2.paymentallianceintl.com (8.14.5/8.13.8) with ESMTP id s56JIOIh042015 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL) for ; Fri, 6 Jun 2014 15:18:24 -0400 (EDT) (envelope-from mikej@paymentallianceintl.com) Received: from PAIMAIL.pai.local ([::1]) by PAIMAIL.pai.local ([::1]) with mapi; Fri, 6 Jun 2014 15:18:24 -0400 From: Michael Jung To: "freebsd-fs@freebsd.org" Date: Fri, 6 Jun 2014 15:18:23 -0400 Subject: RE: i/o error - all block copies unavailable - BTX errors Thread-Topic: i/o error - all block copies unavailable - BTX errors Thread-Index: AQHPgbwVTPdDTjLPF06KFAjxLz0nTg== Message-ID: References: , <5391A154.80309@my.hennepintech.edu>, , , In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 19:18:27 -0000

________________________________________ From: owner-freebsd-fs@freebsd.org [owner-freebsd-fs@freebsd.org] On Behalf Of Dmitry Morozovsky [marck@rinet.ru] Sent: Friday, June 06, 2014 1:55 PM To: Michael Jung Cc: freebsd-fs@freebsd.org Subject: RE: i/o error - all block copies unavailable

On Fri, 6 Jun 2014, Michael Jung wrote:

> Arggg.. I shouldn't need to mount the pool!
>
> #/sbin/gpart bootcode -b /boot/gptzfsboot -p /boot/gptboot -I 1 ada0
> gpart: File too large
> #
>
> But if one looks at provider 1 on ada0 it is 512k and type freebsd-boot...
>
> #ls -la /boot/gptzfsboot
> -r--r--r--  1 root  wheel  41476 May 25 19:26 /boot/gptzfsboot
> #
>
> So why the "File too large" message?

You overmixed them ;)

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

(lowercase 'i' and pmbr, 1-sector pre-loader for -b)

-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ _______________________________________________ freebsd-fs@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to freebsd-fs-unsubscribe@freebsd.org

Dmitry:

I should have read UPDATING AGAIN before even posting my problem, instead of doing things from memory ;-) But even after booting from a current 10-stable ISO and running

gpart bootcode -p /boot/gptzfsboot -i 1 ad0
gpart bootcode -p /boot/gptzfsboot -i 1 ad1

it still yields my original issue:

Loading Operating System....
ZFS: i/o error - all block copies unavailable
ZFS: can't read object set for dataset u
ZFS: can't open root filesystem
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
Default: zroot:
boot:

+++++++++++++++++++++++++++++++++

An added fact that I should have stated, but did not think would cause an issue: when the reboot failed I pulled an additional mirror, TANK, and cabled up my SATA DVD-RW to boot and re-install the boot loader, as I did not have a USB stick to boot from. Since my efforts so far have not corrected the issue, I reconnected all SATA devices, removed the DVD-ROM, and added back the mirrored tank pool - all, I assure you, on the same SATA ports.

Interestingly, on boot I get the same error I described originally, but running status at the boot prompt causes a BTX error - image attached. I don't know if anyone has noticed, but ada2 is an SSD which I use for ZIL/L2ARC - could that cause a problem?

I am perplexed that when I boot a head or 10-stable ISO and am in single user mode, the OS does not find any ZFS partitions. I can apply patches and build ISOs on other machines to help troubleshoot, but this issue seems to have been prevalent, and random, for a long time - since 8.x.

Regards,
--mikej

boot: status with the non-root mirror tank attached. If the attachment doesn't make it: http://216.26.158.189/btx-error.JPG

GoPai.com | Facebook.com/PaymentAlliance

CONFIDENTIALITY NOTE: This message is intended only for the use of the individual or entity to whom it is addressed and may contain information that is privileged, confidential, and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this transmission in error, please notify us by telephone at (502) 212-4001 or notify us at PAI, Dept. 99, 6060 Dutchmans Lane, Suite 320, Louisville, KY 40205

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 6 23:16:06 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 418EE6DE for ; Fri, 6 Jun 2014 23:16:06 +0000 (UTC) Received: from mail-ob0-x22f.google.com (mail-ob0-x22f.google.com [IPv6:2607:f8b0:4003:c01::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0A5472AC3 for ; Fri, 6 Jun 2014 23:16:05 +0000 (UTC) Received: by mail-ob0-f175.google.com with SMTP id wo20so3530512obc.6 for ; Fri, 06 Jun 2014 16:16:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=ZTfXTSPiZ+pxPmkdiLqsy9L2UUi5LsmC2w2eBINA+jU=; b=HgBpSKlYqMxPI86fHIyV314IbCv0Ezt+Tf60H/xHrUb19nAOB/NMGqA0XHJRtSm6DJ bVcgeZszrJNf6/Z5E4hkKWjzC3VmQsEuGormZ/2ucheLKHtpgNPpnL259nzUGv1FHq42 BzZpinQ5djA3kLdVb5V/qRSJ6qt0S2N658w7h/jE5MyGQmBhenMOjmtIC9SRTM3X+WBK +AGreynE5xwQ+ep+/K23q8V7MSUbfwZ/lSqQj4xnAz1tMxhGXzyqjQe4a7wV1Pmvf2zm ZJmK8Hq+ZSWlKxcAv8i4JVIquFkYW3e23JtGtab3gSBQXDsF2YdXz5/KOBSF8eHvII23 jFYw== MIME-Version: 1.0 X-Received: by 10.182.60.42 with SMTP id e10mr8673750obr.33.1402096565168; Fri, 06 Jun 2014 16:16:05 -0700 (PDT) Received: by 10.76.167.164 with HTTP; Fri, 6 Jun 2014 16:16:03 -0700 (PDT) In-Reply-To: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> Date: Fri, 6 Jun 2014 16:16:03 -0700 Message-ID: Subject: Re: Is ZFS Multi-vdev root pool configuration discovery supported? From: Freddie Cash To: Martin Simmons Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Jun 2014 23:16:06 -0000

Typos courtesy of my Android phone.

On Jun 6, 2014 9:35 AM, "Martin Simmons" wrote:
>
> Hi,
>
> Does anyone know if FreeBSD 9.2 (or indeed head) supports "Root On ZFS" with a
> multi-vdev pool (e.g.
4 disks configured as two mirrored pairs)? > > Looking at the source, sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c > contains the comment: > > /* > * Multi-vdev root pool configuration discovery is not supported yet. > */ > > but the code below it from r243502 suggests that this is no longer true. FreeBSD 9.2 boots successfully off a 2-mirror vdev pool. I run that at home. gptzfsloader is installed on each disk in the system, and it will boot off any of them (changed order manually in the BIOS to test). From owner-freebsd-fs@FreeBSD.ORG Sat Jun 7 14:58:18 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6B559DE4 for ; Sat, 7 Jun 2014 14:58:18 +0000 (UTC) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id BDF92241F for ; Sat, 7 Jun 2014 14:58:17 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id RAA11184; Sat, 07 Jun 2014 17:58:06 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1WtI4M-000Gmb-Gi; Sat, 07 Jun 2014 17:58:06 +0300 Message-ID: <53932846.20806@FreeBSD.org> Date: Sat, 07 Jun 2014 17:57:10 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0 MIME-Version: 1.0 To: Martin Simmons , freebsd-fs@FreeBSD.org Subject: Re: Is ZFS Multi-vdev root pool configuration discovery supported? References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> In-Reply-To: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 07 Jun 2014 14:58:18 -0000 on 06/06/2014 19:24 Martin Simmons said the following: > Hi, > > Does anyone know if FreeBSD 9.2 (or indeed head) supports "Root On ZFS" with a > multi-vdev pool (e.g. 4 disks configured as two mirrored pairs)? > > Looking at the source, sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c > contains the comment: > > /* > * Multi-vdev root pool configuration discovery is not supported yet. > */ > > but the code below it from r243502 suggests that this is no longer true. This is a stale comment, it must be removed. 
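For concreteness, the configuration being asked about - and the one Freddie reports booting at home - is a root pool built from two mirror vdevs. With hypothetical partition names, it would be created along the lines of:

zpool create zroot mirror ada0p3 ada1p3 mirror ada2p3 ada3p3

with boot code installed on every member disk so the BIOS can start the system from any of them.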
-- Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 7 15:09:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AC907FBE for ; Sat, 7 Jun 2014 15:09:41 +0000 (UTC) Received: from smtprelay05.ispgateway.de (smtprelay05.ispgateway.de [80.67.31.97]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6E8AF24E3 for ; Sat, 7 Jun 2014 15:09:41 +0000 (UTC) Received: from [84.44.176.61] (helo=fabiankeil.de) by smtprelay05.ispgateway.de with esmtpsa (SSLv3:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1WtIE3-0002gv-Q9 for freebsd-fs@freebsd.org; Sat, 07 Jun 2014 17:08:07 +0200 Date: Sat, 7 Jun 2014 17:08:03 +0200 From: Fabian Keil To: freebsd-fs@freebsd.org Subject: Re: freebsd vfs, solaris vfs, zfs Message-ID: <20140607170803.6b5d624b@fabiankeil.de> In-Reply-To: <5346C3E2.2080302@FreeBSD.org> References: <5346C3E2.2080302@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/_iLZ520kjNELAjHt9.lkptS"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 07 Jun 2014 15:09:41 -0000

--Sig_/_iLZ520kjNELAjHt9.lkptS Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable

Andriy Gapon wrote:

> I've tried to express some of my understanding of how FreeBSD VFS works and how
> it compares to the Solaris VFS model, maybe you would find that interesting:
> http://www.hybridcluster.com/blog/complexity-freebsd-vfs-using-zfs-example-part-2/
> I will certainly appreciate any feedback.

I'm interested in articles like this, thanks for taking the time to write them.
Fabian --Sig_/_iLZ520kjNELAjHt9.lkptS Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (FreeBSD) iEYEARECAAYFAlOTKtgACgkQBYqIVf93VJ1PfwCfQ/mMCE/+y9eT69hB3NLOPDeB Zx4An2kf33PRniVv0qZ7p7qYhrlKwnke =IuAF -----END PGP SIGNATURE----- --Sig_/_iLZ520kjNELAjHt9.lkptS-- From owner-freebsd-fs@FreeBSD.ORG Sat Jun 7 17:52:24 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3281E176 for ; Sat, 7 Jun 2014 17:52:24 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 12D4422F4 for ; Sat, 7 Jun 2014 17:52:23 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id A289476AF8; Sat, 7 Jun 2014 10:52:22 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 12596-01; Sat, 7 Jun 2014 10:52:22 -0700 (PDT) Received: from [10.8.0.30] (unknown [10.8.0.30]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id D582876AEE; Sat, 7 Jun 2014 10:52:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=ixsystems.com; s=newknight0; t=1402163542; bh=uOKAvtpIp5RgjsfUqdj5dv3hDHnk8Nkvz7GAdrofGTM=; h=Subject:From:In-Reply-To:Date:Cc:References:To; b=msQ3VsVKwvIB0vVvLv2lLS1jqen+2pjAJ6BiWFw55eA94V8ROz2D+wb+hYBYkJuqv LA2woP5oyyrY+TwaMETK8+0rPd/K2kqC0TFlnhjFiU1+TLRV6O3JMPBQPLkmIZC9eV qmmMlhC4SGwJHNgzqKmocdbrKADjpGd1FN8Ncys8= Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: freebsd vfs, solaris vfs, zfs From: Jordan Hubbard In-Reply-To: <20140607170803.6b5d624b@fabiankeil.de> Date: Sat, 7 Jun 2014 10:52:20 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <20A7B2EB-CD96-4952-BB20-4B8E41200AF6@ixsystems.com> References: <5346C3E2.2080302@FreeBSD.org> <20140607170803.6b5d624b@fabiankeil.de> To: Fabian Keil X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 07 Jun 2014 17:52:24 -0000 On Jun 7, 2014, at 8:08 AM, Fabian Keil wrote: > Andriy Gapon wrote: > >> I've tried to express some of my understanding of how FreeBSD VFS works and how >> it compares to Solaris VFS model, maybe you would find that interesting: >> http://www.hybridcluster.com/blog/complexity-freebsd-vfs-using-zfs-example-part-2/ >> I will certainly appreciate any feedback. > > I'm interested in articles like this, thanks for taking the time to write them. Yes, this is a well-written (albeit deeply technical) article on BSD VFS. I get that the author is clearly more familiar with Solaris, and therefore used it as a point of comparison, but I wonder if he has any appetite for a Linux VFS (http://www.win.tue.nl/~aeb/linux/lk/lk-8.html) vs BSD VFS article as well.
I've never really investigated the Linux VFS implementation in any detail, but I'm told it has some nice features to facilitate file change monitoring and simply provides a richer set of semantics for permuting filesystem behaviors. Maybe we could learn a thing or two from it? - Jordan From owner-freebsd-fs@FreeBSD.ORG Sun Jun 8 03:54:51 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4B63AD62 for ; Sun, 8 Jun 2014 03:54:51 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 337AC2EF0 for ; Sun, 8 Jun 2014 03:54:51 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s583spJ0075125 for ; Sun, 8 Jun 2014 04:54:51 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 188328] [zfs] UPDATING should provide caveats for running `zpool upgrade` Date: Sun, 08 Jun 2014 03:54:50 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: ari@ish.com.au X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 08 Jun 2014 03:54:51 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=188328 ari@ish.com.au changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ari@ish.com.au --- Comment #2 from ari@ish.com.au --- Seems unnecessary since 'zpool upgrade' already spits out exactly what you need to do with an appropriate warning. -- You are receiving this mail because: You are the assignee for the bug.
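The caveat the bug asks UPDATING to document is essentially this: once a root pool has been upgraded, the boot blocks on disk must also be new enough to understand the upgraded pool, or the machine can become unbootable. A sketch of the safe order of operations (pool and disk names are hypothetical):

# see the current pool version and what an upgrade would enable
zpool upgrade
zpool upgrade -v

# upgrade, then immediately refresh the boot code on every boot disk
zpool upgrade zroot
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0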
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 08:00:10 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 594A8639 for ; Mon, 9 Jun 2014 08:00:10 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4790C27DC for ; Mon, 9 Jun 2014 08:00:10 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5980AJC056962 for ; Mon, 9 Jun 2014 09:00:10 +0100 (BST) (envelope-from bz-noreply@freebsd.org) Message-Id: <201406090800.s5980AJC056962@kenobi.freebsd.org> From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Mon, 09 Jun 2014 08:00:10 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 08:00:10 -0000 Hi, You have a bug in the "Needs MFC" state which has not been touched in 7 or more days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout you may update this bug with a comment and I won't remind you again for 7 days. This reminder is only sent on Mondays. Please file a bug about concerns you may have. This search was scheduled by eadler@FreeBSD.org.
(7 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 154228: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154228 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [md] md getting stuck in wdrain state Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device Bug 180236: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180236 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] [nullfs] Leakage free space using ZFS with nullfs on 9.1-STABLE From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 12:43:14 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B1952589 for ; Mon, 9 Jun 2014 12:43:14 +0000 (UTC) Received: from lwfs1-cam.cam.lispworks.com (mail.lispworks.com [46.17.166.21]) by mx1.freebsd.org (Postfix) with ESMTP id 4EF1B20B0 for ; Mon, 9 Jun 2014 12:43:13 +0000 (UTC) Received: from higson.cam.lispworks.com (higson.cam.lispworks.com [192.168.1.7]) by lwfs1-cam.cam.lispworks.com (8.14.5/8.14.5) with ESMTP id s59Ch5ht088843; Mon, 9 Jun 2014 13:43:05 +0100 (BST) (envelope-from martin@lispworks.com) Received: from higson.cam.lispworks.com (localhost.localdomain [127.0.0.1]) by higson.cam.lispworks.com (8.14.4) id s59Ch52l026821; Mon, 9 Jun 2014 13:43:05 +0100 Received: (from martin@localhost) by higson.cam.lispworks.com (8.14.4/8.14.4/Submit) id s59Ch55n026817; Mon, 9 Jun 2014 13:43:05 +0100 Date: Mon, 9 Jun 2014 13:43:05 +0100 Message-Id: <201406091243.s59Ch55n026817@higson.cam.lispworks.com> From: Martin Simmons To: freebsd-fs@FreeBSD.org In-reply-to: <53932846.20806@FreeBSD.org> (message from Andriy Gapon on Sat, 07 Jun 2014 17:57:10 +0300) Subject: Re: Is ZFS Multi-vdev root pool configuration discovery supported? 
References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> <53932846.20806@FreeBSD.org> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 12:43:14 -0000 >>>>> On Sat, 07 Jun 2014 17:57:10 +0300, Andriy Gapon said: > > on 06/06/2014 19:24 Martin Simmons said the following: > > Hi, > > > > Does anyone know if FreeBSD 9.2 (or indeed head) supports "Root On ZFS" with a > > multi-vdev pool (e.g. 4 disks configured as two mirrored pairs)? > > > > Looking at the source, sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c > > contains the comment: > > > > /* > > * Multi-vdev root pool configuration discovery is not supported yet. > > */ > > > > but the code below it from r243502 suggests that this is no longer true. > > This is a stale comment, it must be removed. Thanks, that's good to know. __Martin From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 12:50:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9467D671; Mon, 9 Jun 2014 12:50:00 +0000 (UTC) Received: from thebighonker.lerctr.org (lrosenman-1-pt.tunnel.tserv8.dal1.ipv6.he.net [IPv6:2001:470:1f0e:3ad::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "thebighonker.lerctr.org", Issuer "CA Cert Signing Authority" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 64C682101; Mon, 9 Jun 2014 12:50:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Message-ID:References:In-Reply-To:Subject:Cc:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=WfPz/ebV6CHfMudZTztNMUQhnN9F/0vqmCH8aV+XGlw=; b=XkqkkF4gaaO6z+xZkB+cE2bZwlpe+xMc4uLfeNH85Sgaz46ol5sOE6uPNzeMDIDC2CUf1cUgMPrqwo8K4HD9FivJJ0SvNgKsCVX+bSxq55m09gMSh44AeU1vEz+r8j60uRackY5ziqURTP/BGSwcn8GxeqDUIvX6Fbxzo56tzo0=; Received: from localhost.lerctr.org ([127.0.0.1]:12509 helo=webmail.lerctr.org) by thebighonker.lerctr.org with esmtpsa (TLSv1:DHE-RSA-AES256-SHA:256) (Exim 4.82 (FreeBSD)) (envelope-from ) id 1Wtz1Q-000ILE-Re; Mon, 09 Jun 2014 07:49:58 -0500 Received: from host.alcatel.com ([198.205.55.139]) by webmail.lerctr.org with HTTP (HTTP/1.1 POST); Mon, 09 Jun 2014 07:48:04 -0500 MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII; format=flowed Content-Transfer-Encoding: 7bit Date: Mon, 09 Jun 2014 07:48:04 -0500 From: Larry Rosenman To: Martin Simmons Subject: Re: Is ZFS Multi-vdev root pool configuration discovery =?UTF-8?Q?supported=3F?= In-Reply-To: <201406091243.s59Ch55n026817@higson.cam.lispworks.com> References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> <53932846.20806@FreeBSD.org> <201406091243.s59Ch55n026817@higson.cam.lispworks.com> Message-ID: X-Sender: ler@lerctr.org User-Agent: Roundcube Webmail/1.0.1 X-Spam-Score: -3.6 (---) X-LERCTR-Spam-Score: -3.6 (---) X-Spam-Report: SpamScore (-3.6/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-0.651 X-LERCTR-Spam-Report: SpamScore (-3.6/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-0.651 Cc: freebsd-fs@freebsd.org, owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
X-List-Received-Date: Mon, 09 Jun 2014 12:50:00 -0000 On 2014-06-09 07:43, Martin Simmons wrote: >>>>>> On Sat, 07 Jun 2014 17:57:10 +0300, Andriy Gapon said: >> >> on 06/06/2014 19:24 Martin Simmons said the following: >> > Hi, >> > >> > Does anyone know if FreeBSD 9.2 (or indeed head) supports "Root On ZFS" with a >> > multi-vdev pool (e.g. 4 disks configured as two mirrored pairs)? >> > >> > Looking at the source, sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c >> > contains the comment: >> > >> > /* >> > * Multi-vdev root pool configuration discovery is not supported yet. >> > */ >> > >> > but the code below it from r243502 suggests that this is no longer true. >> >> This is a stale comment, it must be removed. > > Thanks, that's good to know. For the record I've been booting off a 6-disk raidZ1 pool for a LONG time. I have boot code on all 6 disks, and can rearrange them at will and boot off any of them. > > __Martin > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: ler@lerctr.org US Mail: 108 Turvey Cove, Hutto, TX 78634-5688 From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 14:44:55 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D5A8BF2E for ; Mon, 9 Jun 2014 14:44:55 +0000 (UTC) Received: from lwfs1-cam.cam.lispworks.com (mail.lispworks.com [46.17.166.21]) by mx1.freebsd.org (Postfix) with ESMTP id 6DA672B80 for ; Mon, 9 Jun 2014 14:44:54 +0000 (UTC) Received: from higson.cam.lispworks.com (higson.cam.lispworks.com [192.168.1.7]) by lwfs1-cam.cam.lispworks.com (8.14.5/8.14.5) with ESMTP id s59EijsK095276; Mon, 9 Jun 2014 15:44:46 +0100 (BST) (envelope-from martin@lispworks.com) Received: from higson.cam.lispworks.com (localhost.localdomain [127.0.0.1]) by higson.cam.lispworks.com (8.14.4) id s59Eiji6032361; Mon, 9 Jun 2014 15:44:45 +0100 Received: (from martin@localhost) by higson.cam.lispworks.com (8.14.4/8.14.4/Submit) id s59Eijuh032276; Mon, 9 Jun 2014 15:44:45 +0100 Date: Mon, 9 Jun 2014 15:44:45 +0100 Message-Id: <201406091444.s59Eijuh032276@higson.cam.lispworks.com> From: Martin Simmons To: freebsd-fs@freebsd.org In-reply-to: (message from Larry Rosenman on Mon, 09 Jun 2014 07:48:04 -0500) Subject: Re: Is ZFS Multi-vdev root pool configuration discovery =?UTF-8?Q?supported=3F?= References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> <53932846.20806@FreeBSD.org> <201406091243.s59Ch55n026817@higson.cam.lispworks.com> X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 14:44:55 -0000 >>>>> On Mon, 09 Jun 2014 07:48:04 -0500, Larry Rosenman said: > > On 2014-06-09 07:43, Martin Simmons wrote: > >>>>>> On Sat, 07 Jun 2014 17:57:10
+0300, Andriy Gapon said: > >> > >> on 06/06/2014 19:24 Martin Simmons said the following: > >> > Hi, > >> > > >> > Does anyone know if FreeBSD 9.2 (or indeed head) supports "Root On ZFS" with a > >> > multi-vdev pool (e.g. 4 disks configured as two mirrored pairs)? > >> > > >> > Looking at the source, sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c > >> > contains the comment: > >> > > >> > /* > >> > * Multi-vdev root pool configuration discovery is not supported yet. > >> > */ > >> > > >> > but the code below it from r243502 suggests that this is no longer true. > >> > >> This is a stale comment, it must be removed. > > > > Thanks, that's good to know. > > For the record I've been booting off a 6-disk raidZ1 pool for a LONG > time. > > I have boot code on all 6 disks, and can rearrange them at will and boot > off any of them. OK, but that sounds like a configuration with a single vdev. I was asking about something like RAID10 with multiple top level vdevs. __Martin From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 18:37:37 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 550E763E for ; Mon, 9 Jun 2014 18:37:37 +0000 (UTC) Received: from mail-vc0-x230.google.com (mail-vc0-x230.google.com [IPv6:2607:f8b0:400c:c03::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 18CE42480 for ; Mon, 9 Jun 2014 18:37:37 +0000 (UTC) Received: by mail-vc0-f176.google.com with SMTP id im17so6630199vcb.7 for ; Mon, 09 Jun 2014 11:37:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=xulQIJMiDQa4tRjvx25Mo9MQTGsU0YZAwQCldU/C2IY=; b=jbVq3dborqF0MyjmRBWg0M31V6kPD5r0mrrX6L6ukRqqBZ6aBvum/ge8xl8tbbeqsH RS6hWVAZw+l1nhljAw8nnvPrNzEhwmumkLTji30JTfQeUksS60M1wDiIq87oBzQBJR5V nyoAcrBRL7CzVd5d37MMVd69qjxHG6DrZhtHMcBgefQQhfo6GoKKNj1WjNwCBax9+D4m 4+CBYOyxsdb9Lh4vnqWLWgzusvf+J1LolCt46bdIxH/wQcsILThegs+qMOBDOHV9LW9K VLm/PutXdpx9veYx6rsQKQ7QQwXviDcCrhAm5APA7aFLuTn6KC9/6TWVfgoyYU0tnvTT 7fig== MIME-Version: 1.0 X-Received: by 10.58.106.104 with SMTP id gt8mr11362048veb.46.1402339056138; Mon, 09 Jun 2014 11:37:36 -0700 (PDT) Received: by 10.221.65.198 with HTTP; Mon, 9 Jun 2014 11:37:36 -0700 (PDT) Date: Mon, 9 Jun 2014 14:37:36 -0400 Message-ID: Subject: ZFS import panic (kgdb backtrace attached) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 18:37:37 -0000 ZFS pool was 96% full and under heavy sequential write, and panicked. Dumps were not enabled so this first panic was lost. Fixed that, then... # zpool import -o readonly=on -f pool (ok, zpool export pool) # zpool import -f pool (repeatably panics, coredump, reboot) FreeBSD 8.4-STABLE #0 r265935 i386 [/usr/include]% kgdb /boot/kernel/kernel /.../vmcore.1 GNU gdb 6.1.1 [FreeBSD] This GDB was configured as "i386-marcel-freebsd"...
Unread portion of the kernel message buffer: Fatal trap 12: page fault while in kernel mode cpuid = 0; apic id = 00 fault virtual address = 0x11 fault code = supervisor read, page not present instruction pointer = 0x20:0xc13cb9f4 stack pointer = 0x28:0xfcfb5ac0 frame pointer = 0x28:0xfcfb5ae4 code segment = base 0x0, limit 0xfffff, type 0x1b = DPL 0, pres 1, def32 1, gran 1 processor eflags = interrupt enabled, resume, IOPL = 0 current process = 8 (txg_thread_enter) trap number = 12 panic: page fault cpuid = 0 KDB: stack backtrace: #0 0xc094cd8f at kdb_backtrace+0x4f #1 0xc091c5bc at panic+0x15c #2 0xc0d75193 at trap_fatal+0x323 #3 0xc0d7529c at trap_pfault+0xfc #4 0xc0d7600a at trap+0x44a #5 0xc0d5c2dc at calltrap+0x6 #6 0xc13c91d9 at metaslab_sync+0x509 #7 0xc13eb280 at vdev_sync+0x90 #8 0xc13dded6 at spa_sync+0x496 #9 0xc13e8835 at txg_sync_thread+0x145 #10 0xc08ef767 at fork_exit+0x97 #11 0xc0d5c354 at fork_trampoline+0x8 Uptime: 17m21s Physical memory: 2026 MB Dumping 162 MB: 147 131 115 99 83 67 51 35 19 3 Loaded symbols for /boot/kernel/zfs.ko Loaded symbols for /boot/kernel/opensolaris.ko Loaded symbols for /boot/kernel/geom_eli.ko Loaded symbols for /boot/kernel/crypto.ko Loaded symbols for /boot/kernel/zlib.ko Loaded symbols for /boot/kernel/snd_ich.ko Loaded symbols for /boot/kernel/sound.ko Loaded symbols for /boot/kernel/drm.ko Loaded symbols for /boot/kernel/i915.ko Loaded symbols for /boot/kernel/atapicam.ko Loaded symbols for /boot/kernel/cpuctl.ko #0 doadump () at pcpu.h:244 244 __asm("movl %%fs:0,%0" : "=r" (td)); (kgdb) bt #0 doadump () at pcpu.h:244 #1 0xc091c313 in boot (howto=260) at /.../src/sys/kern/kern_shutdown.c:443 #2 0xc091c5fe in panic (fmt=) at /.../src/sys/kern/kern_shutdown.c:634 #3 0xc0d75193 in trap_fatal (frame=0xfcfb5a80, eva=17) at /.../src/sys/i386/i386/trap.c:1010 #4 0xc0d7529c in trap_pfault (frame=0xfcfb5a80, usermode=0, eva=17) at /.../src/sys/i386/i386/trap.c:872 #5 0xc0d7600a in trap (frame=0xfcfb5a80) at /.../src/sys/i386/i386/trap.c:546 #6 0xc0d5c2dc in calltrap () at /.../src/sys/i386/i386/exception.s:168 #7 0xc13cb9f4 in range_tree_vacate (rt=0xc83dc000, func=0, arg=0x0) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364 #8 0xc13c91d9 in metaslab_sync (msp=0xc8309000, txg=21088308) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:1486 #9 0xc13eb280 in vdev_sync (vd=0xc7f69800, txg=Unhandled dwarf expression opcode 0x93 ) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:2274 #10 0xc13dded6 in spa_sync (spa=0xdf3bf000, txg=21088308) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6506 #11 0xc13e8835 in txg_sync_thread (arg=0xc7907400) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:518 #12 0xc08ef767 in fork_exit (callout=0xc13e86f0 , arg=0xc7907400, frame=0xfcfb5d28) at /.../src/sys/kern/kern_fork.c:872 #13 0xc0d5c354 in fork_trampoline () at /.../src/sys/i386/i386/exception.s:275 (kgdb) list *0xc13cb9f4 0xc13cb9f4 is in range_tree_vacate (/.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364). 
359 void *cookie = NULL; 360 361 ASSERT(MUTEX_HELD(rt->rt_lock)); 362 363 if (rt->rt_ops != NULL) 364 rt->rt_ops->rtop_vacate(rt, rt->rt_arg); 365 366 while ((rs = avl_destroy_nodes(&rt->rt_root, &cookie)) != NULL) { 367 if (func != NULL) 368 func(arg, rs->rs_start, rs->rs_end - rs->rs_start); I'm copying over the readonly import now and can probably play with this if you need more. Thanks. From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 19:01:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 65DACCEF for ; Mon, 9 Jun 2014 19:01:30 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 198E326D0 for ; Mon, 9 Jun 2014 19:01:29 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id D55F320E7088C; Mon, 9 Jun 2014 19:01:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 8774120E7088A; Mon, 9 Jun 2014 19:01:15 +0000 (UTC) Message-ID: From: "Steven Hartland" To: "grarpamp" , References: Subject: Re: ZFS import panic (kgdb backtrace attached) Date: Mon, 9 Jun 2014 20:01:15 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 19:01:30 -0000 ZFS doesn't really like i386 due to the limited stack size. So the first thing I would try is increasing your kernel stack size: options KSTACK_PAGES=4 Regards Steve ----- Original Message ----- From: "grarpamp" To: Sent: Monday, June 09, 2014 7:37 PM Subject: ZFS import panic (kgdb backtrace attached) > ZFS pool was 96% full and under heavy sequential write, and panicked. > Dumps were not enabled so this first panic was lost. Fixed that, then... > > # zpool import -o readonly=on -f pool > (ok, zpool export pool) > # zpool import -f pool > (repeatably panics, coredump, reboot) > > FreeBSD 8.4-STABLE #0 r265935 i386 > > [/usr/include]% kgdb /boot/kernel/kernel /.../vmcore.1 > GNU gdb 6.1.1 [FreeBSD] > This GDB was configured as "i386-marcel-freebsd"...
> > Unread portion of the kernel message buffer: > > Fatal trap 12: page fault while in kernel mode > cpuid = 0; apic id = 00 > fault virtual address = 0x11 > fault code = supervisor read, page not present > instruction pointer = 0x20:0xc13cb9f4 > stack pointer = 0x28:0xfcfb5ac0 > frame pointer = 0x28:0xfcfb5ae4 > code segment = base 0x0, limit 0xfffff, type 0x1b > = DPL 0, pres 1, def32 1, gran 1 > processor eflags = interrupt enabled, resume, IOPL = 0 > current process = 8 (txg_thread_enter) > trap number = 12 > panic: page fault > cpuid = 0 > KDB: stack backtrace: > #0 0xc094cd8f at kdb_backtrace+0x4f > #1 0xc091c5bc at panic+0x15c > #2 0xc0d75193 at trap_fatal+0x323 > #3 0xc0d7529c at trap_pfault+0xfc > #4 0xc0d7600a at trap+0x44a > #5 0xc0d5c2dc at calltrap+0x6 > #6 0xc13c91d9 at metaslab_sync+0x509 > #7 0xc13eb280 at vdev_sync+0x90 > #8 0xc13dded6 at spa_sync+0x496 > #9 0xc13e8835 at txg_sync_thread+0x145 > #10 0xc08ef767 at fork_exit+0x97 > #11 0xc0d5c354 at fork_trampoline+0x8 > Uptime: 17m21s > Physical memory: 2026 MB > Dumping 162 MB: 147 131 115 99 83 67 51 35 19 3 > > Loaded symbols for /boot/kernel/zfs.ko > Loaded symbols for /boot/kernel/opensolaris.ko > Loaded symbols for /boot/kernel/geom_eli.ko > Loaded symbols for /boot/kernel/crypto.ko > Loaded symbols for /boot/kernel/zlib.ko > Loaded symbols for /boot/kernel/snd_ich.ko > Loaded symbols for /boot/kernel/sound.ko > Loaded symbols for /boot/kernel/drm.ko > Loaded symbols for /boot/kernel/i915.ko > Loaded symbols for /boot/kernel/atapicam.ko > Loaded symbols for /boot/kernel/cpuctl.ko > > #0 doadump () at pcpu.h:244 > 244 __asm("movl %%fs:0,%0" : "=r" (td)); > > (kgdb) bt > #0 doadump () at pcpu.h:244 > #1 0xc091c313 in boot (howto=260) at /.../src/sys/kern/kern_shutdown.c:443 > #2 0xc091c5fe in panic (fmt=) at > /.../src/sys/kern/kern_shutdown.c:634 > #3 0xc0d75193 in trap_fatal (frame=0xfcfb5a80, eva=17) at > /.../src/sys/i386/i386/trap.c:1010 > #4 0xc0d7529c in trap_pfault (frame=0xfcfb5a80, usermode=0, eva=17) > at /.../src/sys/i386/i386/trap.c:872 > #5 0xc0d7600a in trap (frame=0xfcfb5a80) at /.../src/sys/i386/i386/trap.c:546 > #6 0xc0d5c2dc in calltrap () at /.../src/sys/i386/i386/exception.s:168 > #7 0xc13cb9f4 in range_tree_vacate (rt=0xc83dc000, func=0, arg=0x0) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364 > #8 0xc13c91d9 in metaslab_sync (msp=0xc8309000, txg=21088308) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:1486 > #9 0xc13eb280 in vdev_sync (vd=0xc7f69800, txg=Unhandled dwarf > expression opcode 0x93 > ) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:2274 > #10 0xc13dded6 in spa_sync (spa=0xdf3bf000, txg=21088308) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6506 > #11 0xc13e8835 in txg_sync_thread (arg=0xc7907400) at > /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:518 > #12 0xc08ef767 in fork_exit (callout=0xc13e86f0 , > arg=0xc7907400, frame=0xfcfb5d28) at /.../src/sys/kern/kern_fork.c:872 > #13 0xc0d5c354 in fork_trampoline () at /.../src/sys/i386/i386/exception.s:275 > > (kgdb) list *0xc13cb9f4 > 0xc13cb9f4 is in range_tree_vacate > (/.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364). 
> 359 void *cookie = NULL; > 360 > 361 ASSERT(MUTEX_HELD(rt->rt_lock)); > 362 > 363 if (rt->rt_ops != NULL) > 364 rt->rt_ops->rtop_vacate(rt, rt->rt_arg); > 365 > 366 while ((rs = avl_destroy_nodes(&rt->rt_root, &cookie)) > != NULL) { > 367 if (func != NULL) > 368 func(arg, rs->rs_start, rs->rs_end - > rs->rs_start); > > I'm copying over the readonly import now and can > probably play with this if you need more. Thanks. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 19:28:01 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8A9D0750 for ; Mon, 9 Jun 2014 19:28:01 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 71A0128E0 for ; Mon, 9 Jun 2014 19:28:01 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s59JS1qi078793 for ; Mon, 9 Jun 2014 20:28:01 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167977] [smbfs] mount_smbfs results are differ when utf-8 or UTF-8 local encoding's name is used Date: Mon, 09 Jun 2014 19:28:00 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: commit-hook@freebsd.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 19:28:01 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167977 --- Comment #6 from commit-hook@freebsd.org --- A commit references this bug: Author: jhb Date: Mon Jun 9 19:27:48 UTC 2014 New revision: 267291 URL: http://svnweb.freebsd.org/changeset/base/267291 Log: Use strcasecmp() instead of strcmp() when checking user-supplied encoding names so that encoding names are treated as case-insensitive. This allows the use of 'utf-8' instead of 'UTF-8' for example and matches the behavior of iconv(1). PR: 167977 Submitted by: buganini@gmail.com MFC after: 1 week Changes: head/sys/libkern/iconv.c head/sys/libkern/iconv_ucs.c -- You are receiving this mail because: You are the assignee for the bug. 
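For context, the user-visible effect of r267291 is that the kernel now matches encoding names case-insensitively, the way iconv(1) already did. A hypothetical smbfs mount illustrating the point (the server, share, mount point, and character sets below are made up; per mount_smbfs(8), -E takes local:server character sets):

# both spellings now select the same conversion; the lowercase one
# previously failed because the kernel compared names with strcmp()
mount_smbfs -E UTF-8:cp437 //user@server/share /mnt
mount_smbfs -E utf-8:cp437 //user@server/share /mnt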
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 19:29:22 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A95DC91C for ; Mon, 9 Jun 2014 19:29:22 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9049428F9 for ; Mon, 9 Jun 2014 19:29:22 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s59JTMRp094899 for ; Mon, 9 Jun 2014 20:29:22 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167977] [smbfs] mount_smbfs results are differ when utf-8 or UTF-8 local encoding's name is used Date: Mon, 09 Jun 2014 19:29:22 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: jhb@FreeBSD.org X-Bugzilla-Status: Needs MFC X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: jhb@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 19:29:22 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167977 John Baldwin changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Discussion |Needs MFC Assignee|freebsd-fs@FreeBSD.org |jhb@FreeBSD.org --- Comment #7 from John Baldwin --- Committed to HEAD, thanks! -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 20:15:36 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F340A190 for ; Mon, 9 Jun 2014 20:15:35 +0000 (UTC) Received: from mail-vc0-x233.google.com (mail-vc0-x233.google.com [IPv6:2607:f8b0:400c:c03::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B47282E06 for ; Mon, 9 Jun 2014 20:15:35 +0000 (UTC) Received: by mail-vc0-f179.google.com with SMTP id id10so5033189vcb.24 for ; Mon, 09 Jun 2014 13:15:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=c8qA7ULWdA8/bjdXd2qmRZgH8OceLnsTU73ZCJupgec=; b=fAoKfO57IL4Is7r7ykgGv5G1YnhMFgOkxoBqUeu2YKVMYODKZ1ez+pZYjieoBwlbz1 esZjlVJTTlPqNK8aJodeQcYT/CbfLIs06/ECOJbT3XRKvMwX+/p3w0KSVqa0MHXfuAqc Tb1QEk/cmBSajtL/bVn0H2T/0kVPQgUCM+IB9r1FEmcm/Pmb1V3JjNvPdZPE0M4wv+CL UmIG++SH80kfU9F5VfbD1kQSprx4uNpmDeVi7tcqGSFzqgCx2bl+CIniND3Z3BhR576o MFu6QpDMl1qbYQpyWRSBplF3I/xSZsld64Ly9rSn2KC1AEG5sgFFXQx9OpURUes5TuWo 61dA== MIME-Version: 1.0 X-Received: by 10.220.92.135 with SMTP id r7mr27616813vcm.11.1402344933990; Mon, 09 Jun 2014 13:15:33 -0700 (PDT) Received: by 10.221.65.198 with HTTP; Mon, 9 Jun 2014 13:15:33 -0700 (PDT) In-Reply-To: References: Date: Mon, 9 Jun 2014 16:15:33 -0400 Message-ID: Subject: Re: ZFS import panic (kgdb backtrace attached) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 20:15:36 -0000 On Mon, Jun 9, 2014 at 3:01 PM, Steven Hartland wrote: > ZFS doesn't really like i386 due to the limited stack size > > So the first thing I would try is increasing your kernel stack size: > options KSTACK_PAGES=4 If it's not a genuine bug (ie: just the usual ZFS on i386 issues), I can move it to default x64 with 8GB in maybe a week. I forgot to include, this is stock GENERIC with: real memory = 2147483648 (2048 MB) vm.kmem_size=650000000 I don't recall why I set kmem_size. And when wired mem gets above 600 things get unrecoverably slow and the box will lock/panic if I press on zfs at/above 600. With the 100 or so heavy write panics and a dozen+ power cuts across multiple zfs/pool/bsd versions, I'm amazed I still have any readable zfs fs's at all. Anyway, posted in case it's a genuine bug.
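On 8.x, KSTACK_PAGES is (to my knowledge) a compile-time option rather than a loader tunable, so acting on Steven's suggestion means building a custom kernel. A minimal sketch, assuming GENERIC as the base; the config name BIGSTACK is made up:

# /usr/src/sys/i386/conf/BIGSTACK
include GENERIC
ident   BIGSTACK
options KSTACK_PAGES=4    # i386 GENERIC defaults to 2

# then rebuild and install
cd /usr/src
make buildkernel KERNCONF=BIGSTACK
make installkernel KERNCONF=BIGSTACK

The vm.kmem_size setting mentioned above stays in /boot/loader.conf; only the stack size needs the rebuild.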
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 9 20:29:25 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D7D10536 for ; Mon, 9 Jun 2014 20:29:25 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 9BEED2EF3 for ; Mon, 9 Jun 2014 20:29:25 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id 78E4120E7088C; Mon, 9 Jun 2014 20:29:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id 4997820E7088A; Mon, 9 Jun 2014 20:29:17 +0000 (UTC) Message-ID: <48A9B8D0A85E4088B6EB712173219681@multiplay.co.uk> From: "Steven Hartland" To: "grarpamp" , References: Subject: Re: ZFS import panic (kgdb backtrace attached) Date: Mon, 9 Jun 2014 21:29:18 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jun 2014 20:29:25 -0000 Can't say for sure; usually it's a double fault panic for a stack issue, but worth eliminating as an initial step. I would also consider moving to 10 from 8.4 tbh. Regards Steve ----- Original Message ----- From: "grarpamp" To: Sent: Monday, June 09, 2014 9:15 PM Subject: Re: ZFS import panic (kgdb backtrace attached) > On Mon, Jun 9, 2014 at 3:01 PM, Steven Hartland wrote: >> ZFS doesn't really like i386 due to the limited stack size >> >> So the first thing I would try is increasing your kernel stack size: >> options KSTACK_PAGES=4 > > If it's not a genuine bug (ie: just the usual ZFS on i386 issues), > I can move it to default x64 with 8GB in maybe a week. > > I forgot to include, this is stock GENERIC with: > real memory = 2147483648 (2048 MB) > vm.kmem_size=650000000 > > I don't recall why I set kmem_size. And when wired > mem gets above 600 things get unrecoverably slow and > the box will lock/panic if I press on zfs at/above 600. > With the 100 or so heavy write panics and a dozen+ > power cuts across multiple zfs/pool/bsd versions, I'm > amazed I still have any readable zfs fs's at all. > > Anyway, posted in case it's a genuine bug.
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Jun 10 08:10:16 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5646FAF7 for ; Tue, 10 Jun 2014 08:10:16 +0000 (UTC) Received: from smtp1.multiplay.co.uk (smtp1.multiplay.co.uk [85.236.96.35]) by mx1.freebsd.org (Postfix) with ESMTP id 1BF1C26CD for ; Tue, 10 Jun 2014 08:10:15 +0000 (UTC) Received: by smtp1.multiplay.co.uk (Postfix, from userid 65534) id BA21D20E7088D; Tue, 10 Jun 2014 08:10:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on smtp1.multiplay.co.uk X-Spam-Level: ** X-Spam-Status: No, score=2.2 required=8.0 tests=AWL,BAYES_00,DOS_OE_TO_MX, FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,RDNS_DYNAMIC,STOX_REPLY_TYPE autolearn=no version=3.3.1 Received: from r2d2 (82-69-141-170.dsl.in-addr.zen.co.uk [82.69.141.170]) by smtp1.multiplay.co.uk (Postfix) with ESMTPS id C925A20E7088B; Tue, 10 Jun 2014 08:10:07 +0000 (UTC) Message-ID: From: "Steven Hartland" To: , "grarpamp" References: <20140610032018.GA46419@neutralgood.org> Subject: Re: ZFS import panic (kgdb backtrace attached) Date: Tue, 10 Jun 2014 09:10:08 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jun 2014 08:10:16 -0000 That's a known issue; there have been some improvements in later versions. ----- Original Message ----- From: > On Mon, Jun 09, 2014 at 02:37:36PM -0400, grarpamp wrote: >> ZFS pool was 96% full and under heavy sequential write, and panicked. > > You also probably noticed that with it that full, performance was probably > total ass. > > Good practice says to not get a ZFS pool too full. Depending on who you > ask, "too full" ranges from 75-80-90%. > -- > Kevin P. Neal http://www.pobox.com/~kpn/ > > "I like being on The Daily Show."
- Kermit the Frog, Feb 13 2001 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Jun 10 10:36:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0AF52732 for ; Tue, 10 Jun 2014 10:36:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E660123FD for ; Tue, 10 Jun 2014 10:36:10 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5AAaA6i080778 for ; Tue, 10 Jun 2014 11:36:10 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 157399] [zfs] trouble with: mdconfig force delete && zfs stripe Date: Tue, 10 Jun 2014 10:36:09 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: danfe@FreeBSD.org X-Bugzilla-Status: Issue Resolved X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jun 2014 10:36:11 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=157399 Alexey Dokuchaev changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Discussion |Issue Resolved CC| |danfe@FreeBSD.org Resolution|--- |Unable to Reproduce --- Comment #4 from Alexey Dokuchaev --- Closed per submitter's request (no longer reproducible). -- You are receiving this mail because: You are the assignee for the bug. 
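As an aside, the scenario in that bug's title can be recreated with md(4) devices if anyone wants to re-verify that it stays fixed; the sizes and unit numbers below are arbitrary, and on a current system this should simply fault the pool rather than misbehave:

# build a two-disk ZFS stripe out of swap-backed md devices
mdconfig -a -t swap -s 128m -u 0
mdconfig -a -t swap -s 128m -u 1
zpool create striped md0 md1

# force-detach one member while the pool is active, then observe pool state
mdconfig -d -u 1 -o force
zpool status striped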
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 10 15:06:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 77325E6B for ; Tue, 10 Jun 2014 15:06:52 +0000 (UTC) Received: from mail-lb0-x22a.google.com (mail-lb0-x22a.google.com [IPv6:2a00:1450:4010:c04::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 01F3B2FEF for ; Tue, 10 Jun 2014 15:06:51 +0000 (UTC) Received: by mail-lb0-f170.google.com with SMTP id w7so4034876lbi.15 for ; Tue, 10 Jun 2014 08:06:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=A02IgeScFc235IycEe3plnwXfcGScrWYD1UdoooYb+w=; b=SvtyLR5kZ79jtQGOe4Lo/vNl3QZ5rULD2BGBCZITWClpiCy6xyuYfIBjQKNecvRrM+ 6HvuoZQew1ZptgPnHYLuGouF2wpNOIuEYk/iac5WlCVpP5d/2riez8Xl4GAq4vaxd3WD GBA343b46AA2udcND3Sbcwni+qlpjYUu7WkG2nslZpj4QtzFlsSvr+SCbvB4NfBMcFiP i9CtGR0n9HpyvM9aR8z6BEzTIoPLni6Ff5IWkyR8PgUC/gyZiVDSpq8rc7cjH3a8KDqY mVMHxw2CsZrFHoqxFQsVMPq0tSBpzngtityGfkvAYE9lfnzDxheJOhLtCmf9zAcFaTY0 vXZQ== MIME-Version: 1.0 X-Received: by 10.152.121.72 with SMTP id li8mr2989191lab.45.1402412809689; Tue, 10 Jun 2014 08:06:49 -0700 (PDT) Received: by 10.112.137.69 with HTTP; Tue, 10 Jun 2014 08:06:49 -0700 (PDT) In-Reply-To: <201406091444.s59Eijuh032276@higson.cam.lispworks.com> References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> <53932846.20806@FreeBSD.org> <201406091243.s59Ch55n026817@higson.cam.lispworks.com> <201406091444.s59Eijuh032276@higson.cam.lispworks.com> Date: Tue, 10 Jun 2014 16:06:49 +0100 Message-ID: Subject: Re: Is ZFS Multi-vdev root pool configuration discovery supported? From: Tom Evans To: Martin Simmons Content-Type: text/plain; charset=UTF-8 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jun 2014 15:06:52 -0000 On Mon, Jun 9, 2014 at 3:44 PM, Martin Simmons wrote: >>> On Mon, 09 Jun 2014 07:48:04 -0500, Larry Rosenman said: >> >> For the record I've been booting off a 6-disk raidZ1 pool for a LONG >> time. >> >> I have boot code on all 6 disks, and can rearrange them at will and boot >> off any of them. > > OK, but that sounds like a configuration with a single vdev. I was asking > about something like RAID10 with multiple top level vdevs. > > __Martin IIRC I was booting from a multi-vdev root, with 6 disk raidz in each vdev (but I'm not anymore, I put in a proper mirrored root pool).
Cheers Tom From owner-freebsd-fs@FreeBSD.ORG Tue Jun 10 17:31:19 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C327989A for ; Tue, 10 Jun 2014 17:31:19 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [IPv6:2001:470:1f11:75::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7BE9F2E25 for ; Tue, 10 Jun 2014 17:31:19 +0000 (UTC) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 95CA0B94A; Tue, 10 Jun 2014 13:31:17 -0400 (EDT) From: John Baldwin To: freebsd-fs@freebsd.org Subject: Re: ZFS import panic (kgdb backtrace attached) Date: Tue, 10 Jun 2014 11:58:08 -0400 User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20140415; KDE/4.5.5; amd64; ; ) References: In-Reply-To: MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <201406101158.08599.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Tue, 10 Jun 2014 13:31:17 -0400 (EDT) Cc: grarpamp X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jun 2014 17:31:19 -0000 On Monday, June 09, 2014 2:37:36 pm grarpamp wrote: > ZFS pool was 96% full and under heavy sequential write, and panicked. > Dumps were not enabled so this first panic was lost. Fixed that, then... > > # zpool import -o readonly=on -f pool > (ok, zpool export pool) > # zpool import -f pool > (repeatably panics, coredump, reboot) > > FreeBSD 8.4-STABLE #0 r265935 i386 > > [/usr/include]% kgdb /boot/kernel/kernel /.../vmcore.1 > GNU gdb 6.1.1 [FreeBSD] > This GDB was configured as "i386-marcel-freebsd"... 
> > Unread portion of the kernel message buffer: > > Fatal trap 12: page fault while in kernel mode > cpuid = 0; apic id = 00 > fault virtual address = 0x11 > fault code = supervisor read, page not present > instruction pointer = 0x20:0xc13cb9f4 > stack pointer = 0x28:0xfcfb5ac0 > frame pointer = 0x28:0xfcfb5ae4 > code segment = base 0x0, limit 0xfffff, type 0x1b > = DPL 0, pres 1, def32 1, gran 1 > processor eflags = interrupt enabled, resume, IOPL = 0 > current process = 8 (txg_thread_enter) > trap number = 12 > panic: page fault > cpuid = 0 > KDB: stack backtrace: > #0 0xc094cd8f at kdb_backtrace+0x4f > #1 0xc091c5bc at panic+0x15c > #2 0xc0d75193 at trap_fatal+0x323 > #3 0xc0d7529c at trap_pfault+0xfc > #4 0xc0d7600a at trap+0x44a > #5 0xc0d5c2dc at calltrap+0x6 > #6 0xc13c91d9 at metaslab_sync+0x509 > #7 0xc13eb280 at vdev_sync+0x90 > #8 0xc13dded6 at spa_sync+0x496 > #9 0xc13e8835 at txg_sync_thread+0x145 > #10 0xc08ef767 at fork_exit+0x97 > #11 0xc0d5c354 at fork_trampoline+0x8 > Uptime: 17m21s > Physical memory: 2026 MB > Dumping 162 MB: 147 131 115 99 83 67 51 35 19 3 > > Loaded symbols for /boot/kernel/zfs.ko > Loaded symbols for /boot/kernel/opensolaris.ko > Loaded symbols for /boot/kernel/geom_eli.ko > Loaded symbols for /boot/kernel/crypto.ko > Loaded symbols for /boot/kernel/zlib.ko > Loaded symbols for /boot/kernel/snd_ich.ko > Loaded symbols for /boot/kernel/sound.ko > Loaded symbols for /boot/kernel/drm.ko > Loaded symbols for /boot/kernel/i915.ko > Loaded symbols for /boot/kernel/atapicam.ko > Loaded symbols for /boot/kernel/cpuctl.ko > > #0 doadump () at pcpu.h:244 > 244 __asm("movl %%fs:0,%0" : "=r" (td)); > > (kgdb) bt > #0 doadump () at pcpu.h:244 > #1 0xc091c313 in boot (howto=260) at /.../src/sys/kern/kern_shutdown.c:443 > #2 0xc091c5fe in panic (fmt=) at > /.../src/sys/kern/kern_shutdown.c:634 > #3 0xc0d75193 in trap_fatal (frame=0xfcfb5a80, eva=17) at > /.../src/sys/i386/i386/trap.c:1010 > #4 0xc0d7529c in trap_pfault (frame=0xfcfb5a80, usermode=0, eva=17) > at /.../src/sys/i386/i386/trap.c:872 > #5 0xc0d7600a in trap (frame=0xfcfb5a80) at /.../src/sys/i386/i386/trap.c:546 > #6 0xc0d5c2dc in calltrap () at /.../src/sys/i386/i386/exception.s:168 > #7 0xc13cb9f4 in range_tree_vacate (rt=0xc83dc000, func=0, arg=0x0) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364 > #8 0xc13c91d9 in metaslab_sync (msp=0xc8309000, txg=21088308) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:1486 > #9 0xc13eb280 in vdev_sync (vd=0xc7f69800, txg=Unhandled dwarf > expression opcode 0x93 > ) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:2274 > #10 0xc13dded6 in spa_sync (spa=0xdf3bf000, txg=21088308) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6506 > #11 0xc13e8835 in txg_sync_thread (arg=0xc7907400) at > /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:518 > #12 0xc08ef767 in fork_exit (callout=0xc13e86f0 , > arg=0xc7907400, frame=0xfcfb5d28) at /.../src/sys/kern/kern_fork.c:872 > #13 0xc0d5c354 in fork_trampoline () at /.../src/sys/i386/i386/exception.s:275 > > (kgdb) list *0xc13cb9f4 > 0xc13cb9f4 is in range_tree_vacate > (/.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364). 
> 359 void *cookie = NULL; > 360 > 361 ASSERT(MUTEX_HELD(rt->rt_lock)); > 362 > 363 if (rt->rt_ops != NULL) > 364 rt->rt_ops->rtop_vacate(rt, rt->rt_arg); > 365 > 366 while ((rs = avl_destroy_nodes(&rt->rt_root, &cookie)) > != NULL) { > 367 if (func != NULL) > 368 func(arg, rs->rs_start, rs->rs_end - > rs->rs_start); Can you do 'frame 7' and 'p *rt' and 'p *rt->rt_ops'? -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Tue Jun 10 18:00:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 76B54713 for ; Tue, 10 Jun 2014 18:00:38 +0000 (UTC) Received: from mail-vc0-x22e.google.com (mail-vc0-x22e.google.com [IPv6:2607:f8b0:400c:c03::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 387A02133 for ; Tue, 10 Jun 2014 18:00:38 +0000 (UTC) Received: by mail-vc0-f174.google.com with SMTP id hy4so2302394vcb.33 for ; Tue, 10 Jun 2014 11:00:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=3Wul7bl0BioAc8PdxraKTtq7k0lSlZ7xUZZ5fdZ1ngw=; b=axJuJ0USBoE/jlUh6EXiQjV6nd3FIGRZ7UWAEUAJbpTL36WQXnLs9+1Epg0p3OkzYw LBrkGc7x0EZYDHTR4cgcnZX0oe+/DjTtG3IJatQCYNt4X6QoJK5AhXCcEGVu8Bso274A RL59kGfbnqdodnu3NzJsIP+DW4bAaA9MTg8bwowe6eY1A7KEuAFDhALHSrNCe7YaF9J2 NsRcaiA+IRcJbbLPhDk26rs/AgzWfS6RPaGyTU2E4q3r5gxZK5XmlUYdXH1J6cVOQab4 J2vaU4Vnt3EmJ1iqAODbhO7m60vzZBYRsPAM4E5Lx5AGTQMgl7FSUBMQ6jBFS5pmXRN/ C5Ew== MIME-Version: 1.0 X-Received: by 10.58.112.65 with SMTP id io1mr1097197veb.61.1402423237305; Tue, 10 Jun 2014 11:00:37 -0700 (PDT) Received: by 10.221.65.198 with HTTP; Tue, 10 Jun 2014 11:00:37 -0700 (PDT) In-Reply-To: <201406101158.08599.jhb@freebsd.org> References: <201406101158.08599.jhb@freebsd.org> Date: Tue, 10 Jun 2014 14:00:37 -0400 Message-ID: Subject: Re: ZFS import panic (kgdb backtrace attached) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jun 2014 18:00:38 -0000 On Tue, Jun 10, 2014 at 11:58 AM, John Baldwin wrote: > Can you do 'frame 7' and 'p *rt' and 'p *rt->rt_ops'? 
(kgdb) frame 7 #7 0xc13cb9f4 in range_tree_vacate (rt=0xc83dc000, func=0, arg=0x0) at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364 364 rt->rt_ops->rtop_vacate(rt, rt->rt_arg); (kgdb) p *rt $1 = {rt_root = {avl_root = 0xd6900780, avl_compar = 0xc13cb890 , avl_offset = 0, avl_numnodes = 1, avl_size = 48}, rt_space = 4294967296, rt_ops = 0x1, rt_arg = 0x0, rt_histogram = {0 }, rt_lock = 0xc8309000} (kgdb) p *rt->rt_ops Cannot access memory at address 0x1 From owner-freebsd-fs@FreeBSD.ORG Wed Jun 11 16:16:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9D4C4C21 for ; Wed, 11 Jun 2014 16:16:12 +0000 (UTC) Received: from mail-pb0-x231.google.com (mail-pb0-x231.google.com [IPv6:2607:f8b0:400e:c01::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6BE8B28CB for ; Wed, 11 Jun 2014 16:16:12 +0000 (UTC) Received: by mail-pb0-f49.google.com with SMTP id jt11so7486966pbb.8 for ; Wed, 11 Jun 2014 09:16:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphix.com; s=google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=GXQLcEu3eKp/XF84jg1tTHZVLeVYML1e6gD3XQv6b5k=; b=RzgUMNKGdGr83IPtaL/GUrGhNe5V5h9BrZ4w9AK5SssSpBF9aF9KX+CMmg7zsi1i66 ydjlbxA3Ka6F1RUXDrcXn/xuMXfwVjXb6FL7qylN9dhppWQcZbvLbhf0GmJEMH+7OQYj jdISYjIX9sTfOHDfsG5tKGyugXOn+FgXcvTtc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=GXQLcEu3eKp/XF84jg1tTHZVLeVYML1e6gD3XQv6b5k=; b=EFTEJnmETvPVc/zRx2PZR56IJZmLjaWfWnGfnj3f8I2tiO752dud9yFRLXdvkEZGZJ ZvOTTJGGzfJByKOo1F3DapbuhGJptT7VmmcGoUKPY7gamJiUrWZWQlDVhpP0T3l5ANmj 1UzKzSFY+/xck/plmw3e33ZT9NcG5IG5UiszBTt9s4r1a26vt4tVtIXxC0tIamPWdnmG poXn0/O8ICaYyeTsymzJSb4ISFgDn1nfCmlvwGNjLvR5UQKL5k3A9811w47/Z4seyKAW Ht6vlVhBEXeGpKrQ5C30Z9rf4ruXMhWBosC0XFvEngHJJd235joXyLbbFTs0URsZuq47 N5wQ== X-Gm-Message-State: ALoCoQkzpKZLo61Cg9zbbJI17NovXlDmwfXNeF96K2GaEySECw9TPzf1NzxuGvKD7qX5HnDv0mLt MIME-Version: 1.0 X-Received: by 10.66.228.37 with SMTP id sf5mr6020477pac.19.1402503371835; Wed, 11 Jun 2014 09:16:11 -0700 (PDT) Received: by 10.70.0.202 with HTTP; Wed, 11 Jun 2014 09:16:11 -0700 (PDT) In-Reply-To: <20A7B2EB-CD96-4952-BB20-4B8E41200AF6@ixsystems.com> References: <5346C3E2.2080302@FreeBSD.org> <20140607170803.6b5d624b@fabiankeil.de> <20A7B2EB-CD96-4952-BB20-4B8E41200AF6@ixsystems.com> Date: Wed, 11 Jun 2014 09:16:11 -0700 Message-ID: Subject: Re: freebsd vfs, solaris vfs, zfs From: Matthew Ahrens To: Jordan Hubbard Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jun 2014 16:16:12 -0000 If you liked Andriy's article, you will probably also enjoy his talk on this subject from the European OpenZFS Conference: https://www.youtube.com/watch?v=oB-QDwVuBH4&index=8&list=PLaUVvul17xScyhIYmGjaNaGSWI49qyf6K
http://www.open-zfs.org/w/images/9/98/Andriy_-_FreeBSD_Dev_Talk.pdf The video is much more detailed than the slides, so I'd encourage checking it out. If you only have 10 minutes, start around 16:30 for the "TL;DW". The beginning of the video is about how they tested ZFS, the ZPL in particular. Around 12:00 he talks about several problems he discovered with the FreeBSD ZFS code that were due to differences between illumos and FreeBSD VFS. Then around 20:00 he talks about how to fix the problems. --matt On Sat, Jun 7, 2014 at 10:52 AM, Jordan Hubbard wrote: > > On Jun 7, 2014, at 8:08 AM, Fabian Keil > wrote: > > > Andriy Gapon wrote: > > > >> I've tried to express some of my understanding of how FreeBSD VFS works > and how > >> it compares to Solaris VFS model, maybe you would find that interesting: > >> > http://www.hybridcluster.com/blog/complexity-freebsd-vfs-using-zfs-example-part-2/ > >> I will certainly appreciate any feedback. > > > > I'm interested in articles like this, thanks for taking the time to > write them. > > Yes, this is a well-written (albeit deeply technical) article on BSD VFS. > I get that the author is clearly more familiar with Solaris, and therefore > used it as a point of comparison, but I wonder if he has any appetite for a > Linux VFS (http://www.win.tue.nl/~aeb/linux/lk/lk-8.html) vs BSD VFS > article as well. I've never really investigated the Linux VFS > implementation in any detail, but I'm told it has some nice features to > facilitate file change monitoring and simply provides a richer set of > semantics for permuting filesystem behaviors. Maybe we could learn a thing > or two from it? > > - Jordan > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Jun 11 17:55:50 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BA86FCF2 for ; Wed, 11 Jun 2014 17:55:50 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [IPv6:2001:470:1f11:75::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 93800226B for ; Wed, 11 Jun 2014 17:55:50 +0000 (UTC) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 7BA09B9C4; Wed, 11 Jun 2014 13:55:49 -0400 (EDT) From: John Baldwin To: freebsd-fs@freebsd.org Subject: Re: ZFS import panic (kgdb backtrace attached) Date: Wed, 11 Jun 2014 12:52:02 -0400 User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20140415; KDE/4.5.5; amd64; ; ) References: <201406101158.08599.jhb@freebsd.org> In-Reply-To: MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Message-Id: <201406111252.02544.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Wed, 11 Jun 2014 13:55:49 -0400 (EDT) Cc: grarpamp X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jun 2014 17:55:50 -0000 On Tuesday, June 10, 2014 2:00:37 pm grarpamp
wrote: > On Tue, Jun 10, 2014 at 11:58 AM, John Baldwin wrote: > > Can you do 'frame 7' and 'p *rt' and 'p *rt->rt_ops'? > > (kgdb) frame 7 > #7 0xc13cb9f4 in range_tree_vacate (rt=0xc83dc000, func=0, arg=0x0) > at /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364 > 364 rt->rt_ops->rtop_vacate(rt, rt->rt_arg); > (kgdb) p *rt > $1 = {rt_root = {avl_root = 0xd6900780, avl_compar = 0xc13cb890 > , avl_offset = 0, avl_numnodes = 1, avl_size = > 48}, > rt_space = 4294967296, rt_ops = 0x1, rt_arg = 0x0, rt_histogram = {0 > }, rt_lock = 0xc8309000} > (kgdb) p *rt->rt_ops > Cannot access memory at address 0x1 Humm, that is the source of the actual fault. I've no idea why that would be set to 1 however. Unfortunately you need someone more familiar with ZFS to look at this further. -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Wed Jun 11 18:25:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 02039B28; Wed, 11 Jun 2014 18:25:59 +0000 (UTC) Received: from mail-ve0-x22b.google.com (mail-ve0-x22b.google.com [IPv6:2607:f8b0:400c:c01::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A4E84256F; Wed, 11 Jun 2014 18:25:58 +0000 (UTC) Received: by mail-ve0-f171.google.com with SMTP id jz11so291152veb.16 for ; Wed, 11 Jun 2014 11:25:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=HgasuEKU6cFDC+gzGNNyiyUTZf2s4nCqq9yJNzcRgH0=; b=fk8+HSc7c6s7fAjsnTaUGUr/XHBHoA24wj7KBvwYlY/y8X8toUzc0JWs56NFfRDzlX kF/wNHk6Du4mZVBJFpeC9m7X7ydaqd7f4BxcK79AyvAazkKrzXXbmNxj3rDf65rDHlRB oganF3xajPdAQO6+vi0pFSYgcnPAIOc+r2j+d3hHe1ahfspS6C8cyBcaO+vLxsHf4VJA h2+wTiZlhV4g50zlshVHRK6tSGm83uXsdjjmkr8nbmAzMCWN6U3jZKfPqaTTeHpNX7RP 0smYInPUKd6DECbfkfTcl1V4jhd4ioUGtnkGgnftZp3OHiZB6V+2lQzCNJRyiRvETG0N 80Zg== MIME-Version: 1.0 X-Received: by 10.58.46.141 with SMTP id v13mr40611003vem.18.1402511157766; Wed, 11 Jun 2014 11:25:57 -0700 (PDT) Received: by 10.221.65.198 with HTTP; Wed, 11 Jun 2014 11:25:57 -0700 (PDT) In-Reply-To: <201406111252.02544.jhb@freebsd.org> References: <201406101158.08599.jhb@freebsd.org> <201406111252.02544.jhb@freebsd.org> Date: Wed, 11 Jun 2014 14:25:57 -0400 Message-ID: Subject: Re: ZFS import panic (kgdb backtrace attached) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Cc: zfs-devel@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jun 2014 18:25:59 -0000 On Wed, Jun 11, 2014 at 12:52 PM, John Baldwin wrote: > On Tuesday, June 10, 2014 2:00:37 pm grarpamp wrote: >> On Tue, Jun 10, 2014 at 11:58 AM, John Baldwin wrote: >> > Can you do 'frame 7' and 'p *rt' and 'p *rt->rt_ops'? 
>> >> (kgdb) frame 7 >> #7 0xc13cb9f4 in range_tree_vacate (rt=0xc83dc000, func=0, arg=0x0) >> at > /.../src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c:364 >> 364 rt->rt_ops->rtop_vacate(rt, rt->rt_arg); >> (kgdb) p *rt >> $1 = {rt_root = {avl_root = 0xd6900780, avl_compar = 0xc13cb890 >> , avl_offset = 0, avl_numnodes = 1, avl_size = >> 48}, >> rt_space = 4294967296, rt_ops = 0x1, rt_arg = 0x0, rt_histogram = {0 >> }, rt_lock = 0xc8309000} >> (kgdb) p *rt->rt_ops >> Cannot access memory at address 0x1 > > Humm, that is the source of the actual fault. I've no idea why that would > be set to 1 however. Unfortunately you need someone more familiar with ZFS to > look at this further. Ok, copying thread to zfs-devel, not sure if it's the right place but probably has open-zfs.org people. I'll see about posting status on RELENG_10 x64 when I can move it there. http://docs.freebsd.org/cgi/mid.cgi?CAD2Ti29gKmED34S5z6NEUnaGOsx8m2uPEJiPWPZLcebJ6PD-mw http://docs.freebsd.org/cgi/mid.cgi?CAD2Ti2_DZqDbOnbwap-YrOEjavyRZ4H7JZ1r8mkk4_OPrYQEUg From owner-freebsd-fs@FreeBSD.ORG Fri Jun 13 11:57:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B070A20F for ; Fri, 13 Jun 2014 11:57:52 +0000 (UTC) Received: from mail-wg0-f52.google.com (mail-wg0-f52.google.com [74.125.82.52]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 47C1623CC for ; Fri, 13 Jun 2014 11:57:51 +0000 (UTC) Received: by mail-wg0-f52.google.com with SMTP id b13so2587240wgh.11 for ; Fri, 13 Jun 2014 04:57:43 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=O/JkrxsdgM6rcUlaRI7Rhe30Vv0HMlfW580WyzPWrrk=; b=hFoPKBPaAxyKg8z75aIXki5L50GFXSc2kJ8gf8vhCSVjQUXZtZKfMsvkm5i+lajQvx KFV9z2pHjtu28lB03MUOMl5bcRRDoXREdtHG/wyzQ4EAlQNXC4vQn36p2sqsnIQwZYG8 H/MaKk1eG/lOUbAjWJiZ/Qlkt5VZKPrKTlz4mfX3QqU9QSSxTnmYz6M5iqu4yoeFV6vw Bwb+GmuKZR8mdZeaGaTpBTVN47vScBYXHKL10FGUAMKDTWzAb7OxHwvM7XCdoASKBMRV 4uVCUa5jkoZCYI+BPr/I38S5HyR9/tYd3Lwm+ZeuD4YEtWwznA5zJOs7z7w7prTWgQDX zkLQ== X-Gm-Message-State: ALoCoQkt9+TuSKZeeENkvcGDXjRwDtJczmX5dJeYNukHckWVkJdBaAK6TaPZB7WqlEHtAZFdhqQs MIME-Version: 1.0 X-Received: by 10.194.60.35 with SMTP id e3mr3866207wjr.12.1402660663696; Fri, 13 Jun 2014 04:57:43 -0700 (PDT) Received: by 10.180.13.242 with HTTP; Fri, 13 Jun 2014 04:57:43 -0700 (PDT) In-Reply-To: References: <201406061624.s56GOOx7015821@higson.cam.lispworks.com> <53932846.20806@FreeBSD.org> <201406091243.s59Ch55n026817@higson.cam.lispworks.com> <201406091444.s59Eijuh032276@higson.cam.lispworks.com> Date: Fri, 13 Jun 2014 13:57:43 +0200 Message-ID: Subject: Re: Is ZFS Multi-vdev root pool configuration discovery supported? 
From: Olav Gjerde To: Tom Evans Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jun 2014 11:57:52 -0000 I have two vdevs of three-disk mirrors, booting from root just fine from all of them with FreeBSD 9.2! I have done the same as Larry Rosenman and installed boot code on all 6 drives. On Tue, Jun 10, 2014 at 5:06 PM, Tom Evans wrote: > On Mon, Jun 9, 2014 at 3:44 PM, Martin Simmons wrote: >>>> On Mon, 09 Jun 2014 07:48:04 -0500, Larry Rosenman said: >>> >>> For the record I've been booting off a 6-disk raidZ1 pool for a LONG >>> time. >>> >>> I have boot code on all 6 disks, and can rearrange them at will and boot >>> off any of them. >> >> OK, but that sounds like a configuration with a single vdev. I was asking >> about something like RAID10 with multiple top level vdevs. >> >> __Martin > > IIRC I was booting from a multi-vdev root, with 6 disk raidz in each > vdev (but I'm not anymore, I put in a proper mirrored root pool). > > Cheers > > Tom > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- Olav Grønås Gjerde From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 04:53:01 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3607F9B6 for ; Sun, 15 Jun 2014 04:53:01 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1BC992078 for ; Sun, 15 Jun 2014 04:53:01 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5F4r0ou062385 for ; Sun, 15 Jun 2014 05:53:00 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 190655] [cd9660] cd9660 cannot mount ISO 9660 multi-session above 4 GiB Date: Sun, 15 Jun 2014 04:53:00 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.4-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: linimon@FreeBSD.org X-Bugzilla-Status: Needs Triage X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to short_desc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 04:53:01 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=190655 Mark Linimon changed: What |Removed |Added
---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org Summary|cd9660 cannot mount ISO |[cd9660] cd9660 cannot |9660 multi-session above 4 |mount ISO 9660 |GiB |multi-session above 4 GiB --- Comment #1 from Mark Linimon --- Over to maintainers. -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 05:04:19 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5CDBBD0F for ; Sun, 15 Jun 2014 05:04:19 +0000 (UTC) Received: from mail-vc0-f178.google.com (mail-vc0-f178.google.com [209.85.220.178]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1A6F72149 for ; Sun, 15 Jun 2014 05:04:18 +0000 (UTC) Received: by mail-vc0-f178.google.com with SMTP id ij19so3726503vcb.23 for ; Sat, 14 Jun 2014 22:04:17 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to :content-type; bh=gcnODdki8v2jCjyttjjQ6JeHUfjw2y2lr7wTUX3jJdk=; b=be+yxo7Q2A35o+Sw3FNzS6HU2kQ+p2wRKu2NoLY3p6/fE2JFCyiU2m6/LV/2rM5vhG fuYmYcWVP3EBm72/AMenqZC2oaryE1RlllHytgZy1nXXFLuZuPiABdTkoyxb88T0GxXZ f/J9ZMIDt/1gkHhn5c/JrVMqo/P7vVNh0W0Rj8r+vH0oi8NOIt1aWplDDKBEocDCiZ1e BAMG+hVfEwXVodjRcLREeuA8uEA4Jmlk7h9kvI17V1xhTQG+S7tku5eU4AosM0A5AX0g Y/MPBn1pC6Tvyrc01BMwUPqkoOWCGrMAFcb+HxZNhhXdkRBrhGm3Gvd3qk2cFWsGCPk3 U5Pw== X-Gm-Message-State: ALoCoQn5obxlg60GToppAaQkmnNi9AZCCPdXUSIvPy0XlKs13MoWFLyBF5klOatogHhtVT0t9IvS MIME-Version: 1.0 X-Received: by 10.220.167.2 with SMTP id o2mr9914847vcy.8.1402808656986; Sat, 14 Jun 2014 22:04:16 -0700 (PDT) Received: by 10.58.187.162 with HTTP; Sat, 14 Jun 2014 22:04:16 -0700 (PDT) X-Originating-IP: [60.242.115.240] Date: Sun, 15 Jun 2014 15:04:16 +1000 Message-ID: Subject: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0> From: Anders Jensen-Waud To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 05:04:19 -0000 Hi all, My main zfs storage pool (named ``storage'') has recently started displaying a very odd error: root@beastie> zpool status -v / pool: backup state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM backup ONLINE 0 0 0 da1 ONLINE 0 0 0 errors: No known data errors pool: storage state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. 
see: http://illumos.org/msg/ZFS-8000-8A scan: scrub in progress since Sun Jun 15 14:18:45 2014 34.3G scanned out of 839G at 19.3M/s, 11h50m to go 72K repaired, 4.08% done config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 da0 ONLINE 0 0 0 (repairing) errors: Permanent errors have been detected in the following files: storage:<0x0> My dmesg: Copyright (c) 1992-2014 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. FreeBSD 10.0-RELEASE-p1 #0: Tue Apr 8 06:45:06 UTC 2014 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610 CPU: Intel(R) Core(TM)2 Duo CPU T7300 @ 2.00GHz (1995.04-MHz K8-class CPU) Origin = "GenuineIntel" Id = 0x6fa Family = 0x6 Model = 0xf Stepping = 10 Features=0xbfebfbff Features2=0xe3bd AMD Features=0x20100800 AMD Features2=0x1 TSC: P-state invariant, performance statistics real memory = 3221225472 (3072 MB) avail memory = 3074908160 (2932 MB) Event timer "LAPIC" quality 400 ACPI APIC Table: FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs FreeBSD/SMP: 1 package(s) x 2 core(s) cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 1 ACPI BIOS Warning (bug): 32/64X length mismatch in FADT/Gpe1Block: 0/32 (20130823/tbfadt-601) ACPI BIOS Warning (bug): Optional FADT field Gpe1Block has zero address or length: 0x000000000000102C/0x0 (20130823/tbfadt-630) ioapic0: Changing APIC ID to 1 ioapic0 irqs 0-23 on motherboard kbd1 at kbdmux0 random: initialized acpi0: on motherboard CPU0: local APIC error 0x40 acpi_ec0: port 0x62,0x66 on acpi0 acpi0: Power Button (fixed) acpi0: reservation of 0, a0000 (3) failed acpi0: reservation of 100000, bef00000 (3) failed cpu0: on acpi0 cpu1: on acpi0 attimer0: port 0x40-0x43 irq 0 on acpi0 Timecounter "i8254" frequency 1193182 Hz quality 0 Event timer "i8254" frequency 1193182 Hz quality 100 hpet0: iomem 0xfed00000-0xfed003ff on acpi0 Timecounter "HPET" frequency 14318180 Hz quality 950 Event timer "HPET" frequency 14318180 Hz quality 450 Event timer "HPET1" frequency 14318180 Hz quality 440 Event timer "HPET2" frequency 14318180 Hz quality 440 atrtc0: port 0x70-0x71 irq 8 on acpi0 Event timer "RTC" frequency 32768 Hz quality 0 Timecounter "ACPI-fast" frequency 3579545 Hz quality 900 acpi_timer0: <24-bit timer at 3.579545MHz> port 0x1008-0x100b on acpi0 acpi_lid0: on acpi0 acpi_button0: on acpi0 pcib0: port 0xcf8-0xcff on acpi0 pci0: on pcib0 pcib1: irq 16 at device 1.0 on pci0 pci1: on pcib1 vgapci0: port 0x2000-0x207f mem 0xd6000000-0xd6ffffff,0xe0000000-0xefffffff,0xd4000000-0xd5ffffff irq 16 at device 0.0 on pci1 vgapci0: Boot video device em0: port 0x1840-0x185f mem 0xfe200000-0xfe21ffff,0xfe225000-0xfe225fff irq 20 at device 25.0 on pci0 em0: Using an MSI interrupt em0: Ethernet address: 00:15:58:c6:c3:3f uhci0: port 0x1860-0x187f irq 20 at device 26.0 on pci0 usbus0 on uhci0 uhci1: port 0x1880-0x189f irq 21 at device 26.1 on pci0 usbus1 on uhci1 ehci0: mem 0xfe226c00-0xfe226fff irq 22 at device 26.7 on pci0 usbus2: EHCI version 1.0 usbus2 on ehci0 hdac0: mem 0xfe220000-0xfe223fff irq 17 at device 27.0 on pci0 pcib2: irq 20 at device 28.0 on pci0 pci2: on pcib2 pcib3: irq 21 at device 28.1 on pci0 pci3: on pcib3 iwn0: mem 0xdf2fe000-0xdf2fffff irq 17 at device 0.0 on pci3 pcib4: irq 22 at device 28.2 on pci0 pci4: on pcib4 pcib5: irq 23 at device 28.3 on pci0 pci5: on pcib5 pcib6: irq 20 at 
device 28.4 on pci0 pci13: on pcib6 uhci2: port 0x18a0-0x18bf irq 16 at device 29.0 on pci0 usbus3 on uhci2 uhci3: port 0x18c0-0x18df irq 17 at device 29.1 on pci0 usbus4 on uhci3 uhci4: port 0x18e0-0x18ff irq 18 at device 29.2 on pci0 usbus5 on uhci4 ehci1: mem 0xfe227000-0xfe2273ff irq 19 at device 29.7 on pci0 usbus6: EHCI version 1.0 usbus6 on ehci1 pcib7: at device 30.0 on pci0 pci21: on pcib7 cbb0: mem 0xf8100000-0xf8100fff irq 16 at device 0.0 on pci21 cardbus0: on cbb0 pccard0: <16-bit PCCard bus> on cbb0 pci21: at device 0.1 (no driver attached) sdhci_pci0: mem 0xf8101800-0xf81018ff irq 18 at device 0.2 on pci21 sdhci_pci0: 1 slot(s) allocated pci21: at device 0.3 (no driver attached) pci21: at device 0.4 (no driver attached) pci21: at device 0.5 (no driver attached) isab0: at device 31.0 on pci0 isa0: on isab0 atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1830-0x183f at device 31.1 on pci0 ata0: at channel 0 on atapci0 ahci0: port 0x1c48-0x1c4f,0x1c1c-0x1c1f,0x1c40-0x1c47,0x1c18-0x1c1b,0x1c20-0x1c3f mem 0xfe226000-0xfe2267ff irq 16 at device 31.2 on pci0 ahci0: AHCI v1.10 with 3 1.5Gbps ports, Port Multiplier not supported ahcich0: at channel 0 on ahci0 ahcich2: at channel 2 on ahci0 pci0: at device 31.3 (no driver attached) acpi_tz0: on acpi0 acpi_tz1: on acpi0 atkbdc0: port 0x60,0x64 irq 1 on acpi0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] psm0: irq 12 on atkbdc0 psm0: [GIANT-LOCKED] psm0: model Generic PS/2 mouse, device ID 0psm0: model Generic PS/2 mouse, device ID 0 battery0: on acpi0 acpi_acad0: on acpi0 orm0: at iomem 0xc0000-0xcefff,0xcf000-0xcffff,0xd0000-0xd0fff,0xe0000-0xeffff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 ppc0: cannot reserve I/O port range est0: on cpu0 p4tcc0: on cpu0 est1: on cpu1 p4tcc1: on cpu1 Timecounters tick every 1.000 msec hdacc0: at cad 0 on hdac0 hdaa0: at nid 1 on hdacc0 pcm0: at nid 18,17 and 28,20,21 on hdaa0 pcm1: at nid 27 on hdaa0 hdacc1: at cad 1 on hdac0 unknown: at nid 2 on hdacc1 (no driver attached) random: unblocking device. usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 480Mbps High Speed USB v2.0 usbus3: 12Mbps Full Speed USB v1.0 usbus4: 12Mbps Full Speed USB v1.0 usbus5: 12Mbps Full Speed USB v1.0 usbus6: 480Mbps High Speed USB v2.0 ugen0.1: at usbus0 uhub0: on usbus0 ugen2.1: at usbus2 uhub1: on usbus2 ugen1.1: at usbus1 uhub2: on usbus1 ugen5.1: at usbus5 uhub3: on usbus5 ugen4.1: at usbus4 uhub4: on usbus4 ugen3.1: at usbus3 uhub5: on usbus3 ugen6.1: at usbus6 uhub6: on usbus6 ada0 at ahcich0 bus 0 scbus1 target 0 lun 0 ada0: ATA-8 SATA 1.x device ada0: Serial Number 071201DP0410DTG7P9AP ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes) ada0: Command Queueing enabled ada0: 190782MB (390721968 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad4 cd0 at ata0 bus 0 scbus0 target 0 lun 0 cd0: Removable CD-ROM SCSI-0 device cd0: Serial Number HC43 045371 cd0: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes) cd0: Attempt to query device size failed: NOT READY, Medium not present Netvsc initializing... SMP: AP CPU #1 Launched! 
Root mount waiting for: usbus6 usbus5 usbus4 usbus3 usbus2 usbus1 usbus0 uhub2: 2 ports with 2 removable, self powered uhub3: 2 ports with 2 removable, self powered uhub0: 2 ports with 2 removable, self powered uhub5: 2 ports with 2 removable, self powered uhub4: 2 ports with 2 removable, self powered Root mount waiting for: usbus6 usbus2 uhub1: 4 ports with 4 removable, self powered Root mount waiting for: usbus6 usbus2 uhub6: 6 ports with 6 removable, self powered ada0 at ahcich0 bus 0 scbus1 target 0 lun 0 ada0: ATA-8 SATA 1.x device ada0: Serial Number 071201DP0410DTG7P9AP ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes) ada0: Command Queueing enabled ada0: 190782MB (390721968 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad4 cd0 at ata0 bus 0 scbus0 target 0 lun 0 cd0: Removable CD-ROM SCSI-0 device cd0: Serial Number HC43 045371 cd0: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes) cd0: Attempt to query device size failed: NOT READY, Medium not present Netvsc initializing... SMP: AP CPU #1 Launched! Root mount waiting for: usbus6 usbus5 usbus4 usbus3 usbus2 usbus1 usbus0 uhub2: 2 ports with 2 removable, self powered uhub3: 2 ports with 2 removable, self powered uhub0: 2 ports with 2 removable, self powered uhub5: 2 ports with 2 removable, self powered uhub4: 2 ports with 2 removable, self powered Root mount waiting for: usbus6 usbus2 uhub1: 4 ports with 4 removable, self powered Root mount waiting for: usbus6 usbus2 uhub6: 6 ports with 6 removable, self powered Root mount waiting for: usbus6 ugen0.2: at usbus0 Root mount waiting for: usbus6 ugen6.2: at usbus6 umass0: on usbus6 umass0: SCSI over Bulk-Only; quirks = 0x0100 umass0:3:0:-1: Attached to scbus3 da0 at umass-sim0 bus 0 scbus3 target 0 lun 0 da0: Fixed Direct Access SCSI-6 device da0: Serial Number NA46H44R da0: 40.000MB/s transfers da0: 953869MB (1953525167 512 byte sectors: 255H 63S/T 121601C) da0: quirks=0x2 GEOM: da0: the primary GPT table is corrupt or invalid. GEOM: da0: using the secondary instead -- recovery strongly advised. GEOM: diskid/DISK-NA46H44R: the primary GPT table is corrupt or invalid. GEOM: diskid/DISK-NA46H44R: using the secondary instead -- recovery strongly advised. Root mount waiting for: usbus6 ugen6.3: at usbus6 umass1: on usbus6 umass1: SCSI over Bulk-Only; quirks = 0x0100 umass1:4:1:-1: Attached to scbus4 Trying to mount root from ufs:/dev/ada0p2 [rw]... da1 at umass-sim1 bus 1 scbus4 target 0 lun 0 da1: Fixed Direct Access SCSI-4 device da1: Serial Number 2GE1GTVM da1: 40.000MB/s transfers da1: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C) da1: quirks=0x2 GEOM: da1: the primary GPT table is corrupt or invalid. GEOM: da1: using the secondary instead -- recovery strongly advised. GEOM: diskid/DISK-2GE1GTVM: the primary GPT table is corrupt or invalid. GEOM: diskid/DISK-2GE1GTVM: using the secondary instead -- recovery strongly advised. ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present; to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf. 
ZFS filesystem version: 5 ZFS storage pool version: features support (5000) Cheers Anders From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 15:29:10 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CA167CE1 for ; Sun, 15 Jun 2014 15:29:10 +0000 (UTC) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7BA7D23BC for ; Sun, 15 Jun 2014 15:29:10 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.8/8.14.8) with ESMTP id s5FFSx3P043961 for ; Sun, 15 Jun 2014 08:28:59 -0700 (PDT) (envelope-from freebsd@pki2.com) Subject: Large ZFS arrays? From: Dennis Glatting To: freebsd-fs@freebsd.org Content-Type: text/plain; charset="ISO-8859-1" Date: Sun, 15 Jun 2014 08:28:59 -0700 Message-ID: <1402846139.4722.352.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-SoftwareMunitions-MailScanner-Information: Dennis Glatting X-SoftwareMunitions-MailScanner-ID: s5FFSx3P043961 X-SoftwareMunitions-MailScanner: Found to be clean X-MailScanner-From: freebsd@pki2.com X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 15:29:10 -0000 Anyone built a large ZFS infrastructure (PB size) and care to share words of wisdom? From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 15:43:16 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 61B1A1AB for ; Sun, 15 Jun 2014 15:43:16 +0000 (UTC) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 291E52510 for ; Sun, 15 Jun 2014 15:43:16 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.8/8.14.8) with ESMTP id s5FFh43K049517 for ; Sun, 15 Jun 2014 08:43:04 -0700 (PDT) (envelope-from dg@pki2.com) Subject: [Fwd: Re: Large ZFS arrays?] From: Dennis Glatting To: freebsd-fs@freebsd.org Content-Type: text/plain; charset="ISO-8859-1" Date: Sun, 15 Jun 2014 08:43:04 -0700 Message-ID: <1402846984.4722.363.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-SoftwareMunitions-MailScanner-Information: Dennis Glatting X-SoftwareMunitions-MailScanner-ID: s5FFh43K049517 X-SoftwareMunitions-MailScanner: Found to be clean X-MailScanner-From: dg@pki2.com X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 15:43:16 -0000 Sorry. Forgot to CC the list. -------- Forwarded Message -------- From: Dennis Glatting To: Rich Subject: Re: Large ZFS arrays?
Date: Sun, 15 Jun 2014 08:42:25 -0700 On Sun, 2014-06-15 at 11:30 -0400, Rich wrote: > What would you like to know? > > Did you mean PB per-storage node or total? > Total. I am looking at three pieces in total: * Two 1PB storage "blocks" providing load sharing and mirroring for failover. * One 5PB storage block for on-line archives (3-5 years). The 1PB nodes will be divided into something that makes sense, such as multiple SuperMicro 847 chassis with 3TB disks providing some number of volumes. Division is a function of application, such as 100TB RAIDz2 volumes for bulk storage whereas smaller 8TB volumes for active data, such as iSCSI, databases, and home directories. Thanks. > - Rich > > On Sun, Jun 15, 2014 at 11:28 AM, Dennis Glatting wrote: > > Anyone built a large ZFS infrastructure (PB size) and care to share > > words of wisdom? > > > > > > > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- Dennis Glatting From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 16:00:26 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BE79D64E for ; Sun, 15 Jun 2014 16:00:26 +0000 (UTC) Received: from mail.your.org (mail.your.org [IPv6:2001:4978:1:2::cc09:3717]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 225992609 for ; Sun, 15 Jun 2014 16:00:26 +0000 (UTC) Received: from mail.your.org (chi02.mail.your.org [204.9.55.23]) by mail.your.org (Postfix) with ESMTP id 58058109391; Sun, 15 Jun 2014 16:00:25 +0000 (UTC) Received: from unassigned.v6.your.org (unknown [IPv6:2001:4978:1:45:5cd9:d206:8dcf:5193]) by mail.your.org (Postfix) with ESMTPA id 3230E109390; Sun, 15 Jun 2014 16:00:25 +0000 (UTC) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.3\)) Subject: Re: [Fwd: Re: Large ZFS arrays?] From: Kevin Day In-Reply-To: <1402846984.4722.363.camel@btw.pki2.com> Date: Sun, 15 Jun 2014 11:00:24 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: References: <1402846984.4722.363.camel@btw.pki2.com> To: Dennis Glatting X-Mailer: Apple Mail (2.1878.3) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 16:00:26 -0000 On Jun 15, 2014, at 10:43 AM, Dennis Glatting wrote: > > Total. I am looking at three pieces in total: > > * Two 1PB storage "blocks" providing load sharing and > mirroring for failover. > > * One 5PB storage block for on-line archives (3-5 years). > > The 1PB nodes will be divided into something that makes sense, such as > multiple SuperMicro 847 chassis with 3TB disks providing some number of > volumes. Division is a function of application, such as 100TB RAIDz2 > volumes for bulk storage whereas smaller 8TB volumes for active data, > such as iSCSI, databases, and home directories. > > Thanks. We're currently using multiples of the SuperMicro 847 chassis with 3TB and 4TB drives, and LSI 9207 controllers. Each 45-drive array is
Each 45 drive array is = configured as 4 11 drive raidz2 groups, plus one hot spare.=20 A few notes: 1) I=92d highly recommend against grouping them together into one giant = zpool unless you really really have to. We just spent a lot of time = redoing everything so that each 45 drive array is its own = zpool/filesystem. You=92re otherwise putting all your eggs into one very = big basket, and if something went wrong you=92d lose everything rather = than just a subset of your data. If you don=92t do this, you=92ll almost = definitely have to run with sync=3Ddisabled, or the number of sync = requests hitting every drive will kill write performance. 2) You definitely want a JBOD controller instead of a smart RAID = controller. The LSI 9207 works pretty well, but when you exceed 192 = drives it complains on boot up of running out of heap space and makes = you press a key to continue, which then works fine. There is a very = recently released firmware update for the card that seems to fix this, = but we haven=92t completed testing yet. You=92ll also want to increase = hw.mps.max_chains. The driver warns you when you need to, but you need = to reboot to change this, and you=92re probably only going to discover = this under heavy load. 3) We=92ve played with L2ARC ssd devices, and aren=92t seeing much = gains. It appears that our active data set is so large that it=92d need = a huge SSD to even hit a small percentage of our frequently used files. = setting =93secondarycache=3Dmetadata=94 does seem to help a bit, but = probably not worth the hassle for us. This probably will depend entirely = on your workload though. 4) =93zfs destroy=94 can be excruciatingly expensive on large datasets. = http://blog.delphix.com/matt/2012/07/11/performance-of-zfs-destroy/ = It=92s a bit better now, but don=92t assume you can =93zfs destroy=94 = without killing performance to everything. If you have specific questions, I=92m happy to help, but I think most of = the advice I can offer is going to be workload specific. If I had to do = it all over again, I=92d probably break things down into many smaller = servers than trying to put as much onto one. 
From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 16:11:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8A7B6A87 for ; Sun, 15 Jun 2014 16:11:13 +0000 (UTC) Received: from mail-ob0-x22f.google.com (mail-ob0-x22f.google.com [IPv6:2607:f8b0:4003:c01::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 544A9273C for ; Sun, 15 Jun 2014 16:11:13 +0000 (UTC) Received: by mail-ob0-f175.google.com with SMTP id wm4so3951597obc.20 for ; Sun, 15 Jun 2014 09:11:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=cBeixkLhg9fksu0VEubSgYAM1XnIH9lgSpgi14u0r9U=; b=I3ZS6YM/yo79at88W5iNzcPnHTdml2lsKlzb8JWWYBx4ftjnw8cT+z1sSfwqo+qgES KjCGgOKrSM0jtom/UIkAhZNKTOwAUN2crccaUSjQMAjNXIdvMUgzJNf9DGmwfOT8sFlH T6asOvRxm8QzfAf4npyqmAfIXVDSAZZ+WKbrT6UGHxk/qWSZxbDWCMmEZ55H2cKTV60s 0TgTsXnRnhYNl2gUcbCq6rCZw02xqPCgycApwwTBqk2fSfr2AeqlPYrEJsJbuXtqxFFt nVRiBWdy3LVXWolviU88YC+dtzlBGFLVk5hCnpwAKcuOeqktMOlvhSYCDd+RHxDvkZLq 8dKw== MIME-Version: 1.0 X-Received: by 10.60.43.199 with SMTP id y7mr2840636oel.58.1402848672395; Sun, 15 Jun 2014 09:11:12 -0700 (PDT) Received: by 10.76.167.164 with HTTP; Sun, 15 Jun 2014 09:11:12 -0700 (PDT) Received: by 10.76.167.164 with HTTP; Sun, 15 Jun 2014 09:11:12 -0700 (PDT) In-Reply-To: <1402846139.4722.352.camel@btw.pki2.com> References: <1402846139.4722.352.camel@btw.pki2.com> Date: Sun, 15 Jun 2014 09:11:12 -0700 Message-ID: Subject: Re: Large ZFS arrays? From: Freddie Cash To: Dennis Glatting Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 16:11:13 -0000 On Jun 15, 2014 8:29 AM, "Dennis Glatting" wrote: > > Anyone built a large ZFS infrastructure (PB size) and care to share > words of wisdom? We don't yet have a petabyte of storage (currently just under 200 TB raw), but our infrastructure will scale to 720 TB raw (using 4 TB drives) without daisy-chaining storage boxes, or 1.4 PB if daisy-chained. We use a SuperMicro H8DGi-F6 motherboard in an SC826 2U chassis with SSDs for the OS, log and cache vdevs directly connected to the onboard SAS controller. We have multiple LSI 9211-8e controllers connected to the external storage boxes (each chassis has an SAS expander). The storage chassis are 45-bay SC846-JBOD chassis, currently using 2TB drives. We currently only have 2 storage chassis connected. It supports 4 chassis directly, or 8 if you daisy-chain the storage chassis. We currently only use these for backup storage, so we configured things for bulk storage and not raw I/O or throughput. We only have gigabit Ethernet, and we saturate that with zfs send every day for several hours. Hope that helps.
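The nightly zfs send Freddie mentions is typically a rolling incremental send piped into zfs recv on the receiving box; a minimal sketch, with hypothetical pool, host, and snapshot names:

  # snapshot everything, then ship only the delta since the previous night
  zfs snapshot -r tank@2014-06-15
  zfs send -R -i tank@2014-06-14 tank@2014-06-15 | \
      ssh backuphost zfs recv -duF backuppool

The -i incremental limits the transfer to one day's churn, which is what lets a multi-TB pool stay in sync over gigabit in a few hours.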
From owner-freebsd-fs@FreeBSD.ORG Sun Jun 15 22:24:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B829185C for ; Sun, 15 Jun 2014 22:24:29 +0000 (UTC) Received: from na01-bn1-obe.outbound.protection.outlook.com (mail-bn1blp0187.outbound.protection.outlook.com [207.46.163.187]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (Client CN "mail.protection.outlook.com", Issuer "MSIT Machine Auth CA 2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 7AC1328BC for ; Sun, 15 Jun 2014 22:24:27 +0000 (UTC) Received: from CH1PRD0310HT001.namprd03.prod.outlook.com (10.255.137.36) by BN1PR0301MB0689.namprd03.prod.outlook.com (25.160.171.26) with Microsoft SMTP Server (TLS) id 15.0.949.11; Sun, 15 Jun 2014 22:24:20 +0000 Received: from [10.0.0.114] (98.240.141.71) by pod51008.outlook.com (10.255.137.36) with Microsoft SMTP Server (TLS) id 14.16.459.0; Sun, 15 Jun 2014 22:24:18 +0000 Message-ID: <539E1D11.9070004@my.hennepintech.edu> Date: Sun, 15 Jun 2014 17:24:17 -0500 From: Andrew Berg User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: Subject: Re: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0> References: <20140615211052.GA63247@neutralgood.org> In-Reply-To: <20140615211052.GA63247@neutralgood.org> Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Originating-IP: [98.240.141.71] X-Microsoft-Antispam: BL:0; ACTION:Default; RISK:Low; SCL:0; SPMLVL:NotSpam; PCL:0; RULEID: X-Forefront-PRVS: 0243E5FD68 X-Forefront-Antispam-Report: SFV:NSPM; SFS:(6009001)(428001)(199002)(189002)(24454002)(74662001)(85852003)(92726001)(92566001)(4396001)(21056001)(65806001)(102836001)(83072002)(85306003)(83506001)(74502001)(101416001)(59896001)(75432001)(23676002)(33656002)(80022001)(65956001)(64706001)(99396002)(81342001)(54356999)(19580405001)(76176999)(81542001)(87266999)(46102001)(65816999)(83322001)(86362001)(20776003)(47776003)(64126003)(66066001)(87936001)(76482001)(50466002)(88552001)(19580395003)(50986999)(105586001)(31966008)(77982001); DIR:OUT; SFP:; SCL:1; SRVR:BN1PR0301MB0689; H:CH1PRD0310HT001.namprd03.prod.outlook.com; FPR:; MLV:sfv; PTR:InfoNoRecords; A:0; MX:1; LANG:en; Received-SPF: None (: my.HennepinTech.edu does not designate permitted sender hosts) Authentication-Results: spf=none (sender IP is ) smtp.mailfrom=aberg010@my.HennepinTech.edu; X-OriginatorOrg: my.hennepintech.edu X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 15 Jun 2014 22:24:29 -0000 On 2014.06.15 16:10, kpneal@pobox.com wrote: > It looks like you are running ZFS with pools consisting of a single disk. > In cases like this if ZFS detects that a file has been corrupted ZFS is > unable to do anything to fix it. Run with the option "copies=2" to have > two copies of every file if you want ZFS to be able to fix broken files. > Of course, this doubles the amount of space you will use, so you have to > think about how important your data is to you. 
A proper mirror with another disk would protect against disk failure and give better performance with the same space cost, so doing that is recommended over using copies=2. > Running ZFS in a partition or on the entire disk is fine either way. But > you have to be consistent. Partitioning a disk and then writing outside > of the partition creates errors like the above GEOM one. I recommend using a partition solely to take advantage of GPT labels. Identifying disks is much easier when you create a pool using devices from labels (/dev/gpt/yourlabel). Even more so if you have a matching physical label on the disk. From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 02:49:55 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 85D828D1 for ; Mon, 16 Jun 2014 02:49:55 +0000 (UTC) Received: from mail-pd0-f170.google.com (mail-pd0-f170.google.com [209.85.192.170]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 580742AB8 for ; Mon, 16 Jun 2014 02:49:54 +0000 (UTC) Received: by mail-pd0-f170.google.com with SMTP id z10so2471145pdj.29 for ; Sun, 15 Jun 2014 19:49:48 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:date:from:to:cc:subject:message-id:references :mime-version:content-type:content-disposition:in-reply-to :organization:user-agent; bh=5cGxsSGdDDr+/JKQzzp8ke42xO8JYFH8dZikoEKCMDI=; b=KYDxD0DqL18Pvyf3Ig4f8S7rqyjk3/EO7rBxlPxFuLT3l3eKPqHh8TGeVecoFc/SKl X/ifgMd1rs/+s3rMhRb1wYVj30zBWfV0JgNa2ofnMohaxTxGE0hPqrJKyrGt5Z61/09q qqjXArtRRqmbvQsNb//5XWkrlhKio9BTiI4PwdK0AoExZOlTrnSmRd3Lva9n1zVcDYIa LCSlwlZ9a2EqHShvDdHQMHBDscLVS4Hf/6xkcXmIPiBTsuesfTM1VJ5fwURd0tS+7GKd 1eS8iPMyZxTmzklEe7GnEJz8RmdZWdIDBz4t0KfsiHykimD0TO5OBh1QGN/M3feyiopp 6PGA== X-Gm-Message-State: ALoCoQlyQRhL4d+JD1S5zGykyfbQoOVQGLoFZ49FGlHHpdaZN7RJp6gUUnwP/uBy9RGj5sgZ7o6M X-Received: by 10.68.225.133 with SMTP id rk5mr20721297pbc.98.1402886988332; Sun, 15 Jun 2014 19:49:48 -0700 (PDT) Received: from localhost ([1.147.125.162]) by mx.google.com with ESMTPSA id qv3sm15682499pbb.87.2014.06.15.19.49.44 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sun, 15 Jun 2014 19:49:47 -0700 (PDT) Date: Mon, 16 Jun 2014 12:49:42 +1000 From: Anders Jensen-Waud To: kpneal@pobox.com Subject: Re: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0> Message-ID: <20140616024942.GA13697@koodekoo.local> References: <20140615211052.GA63247@neutralgood.org> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-15 Content-Disposition: inline In-Reply-To: <20140615211052.GA63247@neutralgood.org> Organization: Jensen-Waud User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 02:49:55 -0000 On Sun, Jun 15, 2014 at 05:10:52PM -0400, kpneal@pobox.com wrote: > On Sun, Jun 15, 2014 at 03:04:16PM +1000, Anders Jensen-Waud wrote: > > Hi all, > > > > My main zfs storage pool (named ``storage'') has recently started > > displaying a very odd error: > 
> > > root@beastie> zpool status -v > > / > > > > pool: backup > > state: ONLINE > > scan: none requested > > config: > > NAME STATE READ WRITE CKSUM > > backup ONLINE 0 0 0 > > da1 ONLINE 0 0 0 > > errors: No known data errors > > pool: storage > > state: ONLINE > > status: One or more devices has experienced an error resulting in data > > corruption. Applications may be affected. > > action: Restore the file in question if possible. Otherwise restore the > > entire pool from backup. > > see: http://illumos.org/msg/ZFS-8000-8A > > scan: scrub in progress since Sun Jun 15 14:18:45 2014 > > 34.3G scanned out of 839G at 19.3M/s, 11h50m to go > > 72K repaired, 4.08% done > > config: > > NAME STATE READ WRITE CKSUM > > storage ONLINE 0 0 0 > > da0 ONLINE 0 0 0 (repairing) > > > > errors: Permanent errors have been detected in the following files: > > storage:<0x0> > > I'm not sure what causes ZFS to lose the filename like this. I'll let > someone else comment. I want to say you have a corrupt file in a snapshot, > but don't hold me to that. > > It looks like you are running ZFS with pools consisting of a single disk. > In cases like this if ZFS detects that a file has been corrupted ZFS is > unable to do anything to fix it. Run with the option "copies=2" to have > two copies of every file if you want ZFS to be able to fix broken files. > Of course, this doubles the amount of space you will use, so you have to > think about how important your data is to you. Thank you for the tip. I didn't know about copies=2, so I will definitely consider that option. I am running ZFS on a single disk -- a 1 TB USB drive -- attached to my "server" at home. It is not exactly an enterprise server, but it fits well for my home purposes, namely file backup from my different computers. On a nightly basis I then copy and compress the data sets from storage to another USB drive to have a second copy. In this instance, the nightly backup script (zfs send/recv based) hadn't run properly so I had no backup to recover from. Given that my machine only has 3 GB RAM, I was wondering if the issue might be memory related and if I am better off converting the volume back to UFS. I am keen to stay on ZFS to benefit from snapshots, compression, security etc. Any thoughts? > > I don't know what caused the corrupt file. It could be random chance, or > it could be that you accidentally did something to damage the pool. I say > that because: > > > da1 at umass-sim1 bus 1 scbus4 target 0 lun 0 > > da1: Fixed Direct Access SCSI-4 device > > da1: Serial Number 2GE1GTVM > > da1: 40.000MB/s transfers > > da1: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C) > > da1: quirks=0x2 > > GEOM: da1: the primary GPT table is corrupt or invalid. > > GEOM: da1: using the secondary instead -- recovery strongly advised. > > GEOM: diskid/DISK-2GE1GTVM: the primary GPT table is corrupt or invalid. > > GEOM: diskid/DISK-2GE1GTVM: using the secondary instead -- recovery > > strongly advised. > > You've got something going on here. Did you GPT partition the disk? The > zpool status you posted says you built your pools on the entire disk and > not inside a partition. But GEOM is saying the disk has been partitioned. > GPT stores data at both the beginning and end of the disk. ZFS may have > trashed the beginning of the disk but not gotten to the end yet. 
This disk is not the ``storage'' zpool -- it is my ``backup'' pool, which is on a different drive: NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT backup 464G 235G 229G 50% 1.00x ONLINE - storage 928G 841G 87.1G 90% 1.00x ONLINE - Running 'gpt recover /dev/da1' fixes the error above but after a reboot it reappears. Would it be better to completely wipe the disk and reinitialise it with zfs? Miraculously, an overnight 'zpool scrub storage' has wiped out the errors from yesterday, and I am puzzled why that is the case. As per the original zpool status from yesterday, ZFS warned that I needed to recover all the files from backup aj@beastie> zpool status ~ pool: backup state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM backup ONLINE 0 0 0 da1 ONLINE 0 0 0 errors: No known data errors pool: storage state: ONLINE scan: scrub repaired 984K in 11h37m with 0 errors on Mon Jun 16 01:55:48 2014 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 da0 ONLINE 0 0 0 errors: No known data errors > Running ZFS in a partition or on the entire disk is fine either way. But > you have to be consistent. Partitioning a disk and then writing outside > of the partition creates errors like the above GEOM one. Agree. In this instance it wasn't da0/storage, however. > -- > Kevin P. Neal http://www.pobox.com/~kpn/ > "Not even the dumbest terrorist would choose an encryption program that > allowed the U.S. government to hold the key." -- (Fortune magazine > is smarter than the US government, Oct 29 2001, page 196.) -- Anders Jensen-Waud E: anders@jensenwaud.com From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 04:50:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B297A97B for ; Mon, 16 Jun 2014 04:50:57 +0000 (UTC) Received: from mail-ig0-x22f.google.com (mail-ig0-x22f.google.com [IPv6:2607:f8b0:4001:c05::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 812722474 for ; Mon, 16 Jun 2014 04:50:57 +0000 (UTC) Received: by mail-ig0-f175.google.com with SMTP id uq10so2495859igb.2 for ; Sun, 15 Jun 2014 21:50:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=XG+Rri2/4jkK3/oKh2FRZvTjiLXNa4RF+omJ3FvGbWc=; b=SJ5LD0VkwiBu/SvKrGh37jGdO9FaEc+OjpaAeRG+bmkYA74ZATy/JPpiRrp41y5Eb1 mfRUVqREYRrjSWPSuajEOtBwWVIFfrSU0+i6/koLtJVt42ow+TXYf6/1ypad/qpm+mWh La7XHG/6CaSGzTaojOJOasbWgEduHZsXBu4aOlVrI2Q4dI1QY7yHgfb2MQuxadCcLOJm pEe4ILIdC+x2jdCOhCDTrA3ldAlI9BrtLfSzsnbbMd72fszccOCDWWjAZhx5Ptatxdk4 NxyKtZl8HzXzzDINFyvycAoXga6OWi5WBawC7oBCYz+E1u7h3LQkTVeFOnWmSBpfLJaD fBAw== MIME-Version: 1.0 X-Received: by 10.50.1.6 with SMTP id 6mr22917573igi.36.1402894256362; Sun, 15 Jun 2014 21:50:56 -0700 (PDT) Received: by 10.64.225.73 with HTTP; Sun, 15 Jun 2014 21:50:56 -0700 (PDT) In-Reply-To: References: <1402846139.4722.352.camel@btw.pki2.com> Date: Mon, 16 Jun 2014 00:50:56 -0400 Message-ID: Subject: Re: Large ZFS arrays? 
From: Rich To: Freddie Cash Content-Type: text/plain; charset=ISO-8859-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 04:50:57 -0000 I suppose I should jump in... 8 SC847E26-RJBOD1 per Dell R720 "head" w/128GB RAM, with 2 of the 9201-16e controllers, one port connected per enclosure. (The graphs of how it bottlenecks depending on how you daisy-chain things are...fascinating!) 4 zpools, 11 vdevs of 8 disks each, one disk per JBOD per vdev. SSDs for L2ARC or SLOG are of limited usefulness, given the size of the datasets involved - it'll save you on lots of tiny writes over NFS at times, but otherwise, enough spinning heads will beat the SSDs for sequential IO in non-pathological cases. I can describe more things as desired. :) - Rich On Sun, Jun 15, 2014 at 12:11 PM, Freddie Cash wrote: > On Jun 15, 2014 8:29 AM, "Dennis Glatting" wrote: >> >> Anyone built a large ZFS infrastructure (PB size) and care to share >> words of wisdom? > > We don't yet have a petabyte of storage (currently just under 200 TB raw), > but our infrastructure will scale to 720 TB raw (using 4 TB drives) without > daisy-chaining storage boxes, or 1.4 PB if daisy-chained. > > We use a SuperMicro H8DGi-F6 motherboard in an SC826 2U chassis with SSDs > for the OS, log and cache vdevs directly connected to the onboard SAS > controller. We have multiple LSI 9211-8e controllers connected to the > external storage boxes (each chassis has an SAS expander). > > The storage chassis are 45-bay SC846-JBOD chassis, currently using 2TB > drives. We currently only have 2 storage chassis connected. It supports 4 > chassis directly, or 8 if you daisy-chain the storage chassis. > > We currently only use these for backup storage, so we configured things > for bulk storage and not raw I/O or throughput. We only have gigabit > Ethernet, and we saturate that with zfs send every day for several hours. > > Hope that helps. 
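For readers translating layouts like the two above into commands, a minimal sketch of a pool built from several raidz2 vdevs with separate log and cache devices; the pool name and the daN/adaN device names are placeholders, not either poster's actual configuration:

   # Two 8-disk raidz2 vdevs, a mirrored SLOG, and an L2ARC device.
   zpool create tank \
       raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
       raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
       log mirror ada1 ada2 \
       cache ada3

Further raidz2 vdevs can be appended later with 'zpool add tank raidz2 ...', which is how a chassis-at-a-time build-out like Freddie's grows.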
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 04:53:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 13725A03 for ; Mon, 16 Jun 2014 04:53:41 +0000 (UTC) Received: from mail-ie0-x22d.google.com (mail-ie0-x22d.google.com [IPv6:2607:f8b0:4001:c03::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D705D2488 for ; Mon, 16 Jun 2014 04:53:40 +0000 (UTC) Received: by mail-ie0-f173.google.com with SMTP id y20so4578428ier.18 for ; Sun, 15 Jun 2014 21:53:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=V7oNtmqOoQhJlpEgKIs3d/iMNcPw0N4h5gTmcnZ7s38=; b=bpgY6yak1E/oWITg2uLw3vE1Y4DSrT50OuMiETjBXgYSVweppQKbJu7KR4hEp+bIaK pjBl/D0cMCmTpk2+C0dGX60xEJ2gSNOIiUZBCrD4ZO6bWxXwPsAPWKpG7IS0QNZqxa+O EioahZQzlZDKDcbtEewgchvCdshSTeUXsqgi4+0+55Vwz9N2b86Xff+Lfg038Nywvb7+ 0yaKMsIAUf45eC4Qh04DcCPJaXAey70bQ1gONb83VwM/Gq+0JIOAPLGlFesdjUHWNuHy pH+klpOZs0BFIsYocHuIbweoDGhowoAL7RnXrmctpCx8LAqxqkrqeYtnrED9EgDvDml1 Z8qw== MIME-Version: 1.0 X-Received: by 10.50.1.6 with SMTP id 6mr22928986igi.36.1402894420353; Sun, 15 Jun 2014 21:53:40 -0700 (PDT) Sender: rincebrain@gmail.com Received: by 10.64.225.73 with HTTP; Sun, 15 Jun 2014 21:53:40 -0700 (PDT) In-Reply-To: References: <1402846984.4722.363.camel@btw.pki2.com> Date: Mon, 16 Jun 2014 00:53:40 -0400 X-Google-Sender-Auth: MWtHiL8fVLtZXGxyufys19aYHn8 Message-ID: Subject: Re: [Fwd: Re: Large ZFS arrays?] From: Rich To: Kevin Day Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs , Dennis Glatting X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 04:53:41 -0000 On Sun, Jun 15, 2014 at 12:00 PM, Kevin Day wrote: > > On Jun 15, 2014, at 10:43 AM, Dennis Glatting wrote: >> >> Total. I am looking at three pieces in total: >> >> * Two 1PB storage "blocks" providing load sharing and >> mirroring for failover. >> >> * One 5PB storage block for on-line archives (3-5 years). >> >> The 1PB nodes will be divided into something that makes sense, such as >> multiple SuperMicro 847 chassis with 3TB disks providing some number of >> volumes. Division is a function of application, such as 100TB RAIDz2 >> volumes for bulk storage whereas smaller 8TB volumes for active data, >> such as iSCSI, databases, and home directories. >> >> Thanks. > > 2) You definitely want a JBOD controller instead of a smart RAID controller. The LSI 9207 works pretty well, but when you exceed 192 drives it complains on boot up of running out of heap space and makes you press a key to continue, which then works fine. There is a very recently released firmware update for the card that seems to fix this, but we haven't completed testing yet. You'll also want to increase hw.mps.max_chains. 
The driver warns you when you need to, but you need to reboot to change this, and you're probably only going to discover this under heavy load. You should be able to disable the boot ROM on the HBAs to avoid this problem? I had this happen too in various configs, but only when the controllers were configured to add the devices attached as boot options... - Rich From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 07:42:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F3D13C5E for ; Mon, 16 Jun 2014 07:42:48 +0000 (UTC) Received: from smtprelay05.ispgateway.de (smtprelay05.ispgateway.de [80.67.31.97]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 81DB021B5 for ; Mon, 16 Jun 2014 07:42:48 +0000 (UTC) Received: from [78.35.169.64] (helo=fabiankeil.de) by smtprelay05.ispgateway.de with esmtpsa (SSLv3:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1WwRVq-0008LV-EB for freebsd-fs@freebsd.org; Mon, 16 Jun 2014 09:39:30 +0200 Date: Mon, 16 Jun 2014 09:39:28 +0200 From: Fabian Keil To: freebsd-fs@freebsd.org Subject: Re: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0> Message-ID: <20140616093928.1b96f24b@fabiankeil.de> In-Reply-To: <20140616024942.GA13697@koodekoo.local> References: <20140615211052.GA63247@neutralgood.org> <20140616024942.GA13697@koodekoo.local> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/fv0n5LFkt5WMnWlpfL5xSla"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 07:42:49 -0000 Anders Jensen-Waud wrote: > On Sun, Jun 15, 2014 at 05:10:52PM -0400, kpneal@pobox.com wrote: > > On Sun, Jun 15, 2014 at 03:04:16PM +1000, Anders Jensen-Waud wrote: > > > Hi all, > > > > > > My main zfs storage pool (named ``storage'') has recently started > > > displaying a very odd error: [...] > > > errors: Permanent errors have been detected in the following files: > > > storage:<0x0> > > > > I'm not sure what causes ZFS to lose the filename like this. I'll let > > someone else comment. I want to say you have a corrupt file in a > > snapshot, but don't hold me to that. > > > > It looks like you are running ZFS with pools consisting of a single > > disk. In cases like this if ZFS detects that a file has been corrupted > > ZFS is unable to do anything to fix it. Run with the option "copies=2" > > to have two copies of every file if you want ZFS to be able to fix > > broken files. Of course, this doubles the amount of space you will > > use, so you have to think about how important your data is to you. > > Thank you for the tip. I didn't know about copies=2, so I will > definitely consider that option. > > I am running ZFS on a single disk -- a 1 TB USB drive -- attached to my > "server" at home. It is not exactly an enterprise server, but it fits > well for my home purposes, namely file backup from my different > computers. 
On a nightly basis I then copy and compress the data sets > from storage to another USB drive to have a second copy. In this > instance, the nightly backup script (zfs send/recv based) hadn't run > properly so I had no backup to recover from. > > Given that my machine only has 3 GB RAM, I was wondering if the issue > might be memory related and if I am better off converting the volume > back to UFS. I am keen to stay on ZFS to benefit from snapshots, > compression, security etc. Any thoughts? I doubt that the issue is memory related. BTW, I use single-disk pools for backups as well and one of my systems only has 2 GB RAM. My impression is that ZFS's "permanent error" detection is flawed and may also count (some) temporary errors as permanent. If the "permanent errors" don't survive scrubbing, I wouldn't worry about them, especially if no corrupt files are mentioned. > > You've got something going on here. Did you GPT partition the disk? The > > zpool status you posted says you built your pools on the entire disk > > and not inside a partition. But GEOM is saying the disk has been > > partitioned. GPT stores data at both the beginning and end of the > > disk. ZFS may have trashed the beginning of the disk but not gotten to > > the end yet. > > This disk is not the ``storage'' zpool -- it is my ``backup'' pool, > which is on a different drive: > > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > backup 464G 235G 229G 50% 1.00x ONLINE - > storage 928G 841G 87.1G 90% 1.00x ONLINE - > > Running 'gpt recover /dev/da1' fixes the error above but after a reboot > it reappears. Would it be better to completely wipe the disk and > reinitialise it with zfs? As you mentioned being keen on security above, I think it would make sense to wipe the disk to add geli encryption to the mix [0], but I doubt that the gpt complaints are related to the "problem". 
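If the drive does get wiped, a minimal sketch of reinitialising it with geli underneath ZFS, folding in the copies=2 suggestion from earlier in the thread; the da1 device, pool name, and key file path are assumptions, and this is not the zogftw workflow Fabian references below:

   dd if=/dev/zero of=/dev/da1 bs=1m count=2          # clobber stale GPT/ZFS metadata at the front
   dd if=/dev/random of=/root/da1.key bs=64 count=1   # generate a key file (keep a copy elsewhere!)
   geli init -s 4096 -P -K /root/da1.key /dev/da1     # key file only, no passphrase
   geli attach -p -k /root/da1.key /dev/da1
   zpool create backup /dev/da1.eli
   zfs set copies=2 backup                            # only applies to data written from now on

Note that copies=2 halves usable capacity for new writes and still cannot protect against the whole disk failing.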
Fabian [0] I use zogftw for this: http://www.fabiankeil.de/gehacktes/zogftw/ From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 08:00:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 09718173 for ; Mon, 16 Jun 2014 08:00:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EAE332314 for ; Mon, 16 Jun 2014 08:00:10 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5G80AlU071775 for ; Mon, 16 Jun 2014 09:00:10 +0100 (BST) (envelope-from bz-noreply@freebsd.org) Message-Id: <201406160800.s5G80AlU071775@kenobi.freebsd.org> From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Mon, 16 Jun 2014 08:00:10 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 08:00:11 -0000 Hi, You have a bug in the "Needs MFC" state which has not been touched in 7 or more days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout you may update this bug with a comment and I won't remind you again for 7 days. This reminder is only sent on Mondays. Please file a bug about concerns you may have. This search was scheduled by eadler@FreeBSD.org. 
(7 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 154228: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154228 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [md] md getting stuck in wdrain state Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device Bug 180236: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180236 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] [nullfs] Leakage free space using ZFS with nullfs on 9.1-STABLE From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 08:04:26 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 34EAD519 for ; Mon, 16 Jun 2014 08:04:26 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1C63723FC for ; Mon, 16 Jun 2014 08:04:26 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5G84PTI012965 for ; Mon, 16 Jun 2014 09:04:25 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 154228] [md] md getting stuck in wdrain state Date: Mon, 16 Jun 2014 08:04:25 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: kib@FreeBSD.org X-Bugzilla-Status: Issue Resolved X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: 
freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 08:04:26 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154228 Konstantin Belousov changed: What |Removed |Added ---------------------------------------------------------------------------- Status|Needs MFC |Issue Resolved CC| |kib@FreeBSD.org Resolution|--- |FIXED -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 08:05:37 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1920C598 for ; Mon, 16 Jun 2014 08:05:37 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0075F2408 for ; Mon, 16 Jun 2014 08:05:37 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5G85aol013704 for ; Mon, 16 Jun 2014 09:05:36 +0100 (BST) (envelope-from bz-noreply@freebsd.org) From: bz-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 180236] [zfs] [nullfs] Leakage free space using ZFS with nullfs on 9.1-STABLE Date: Mon, 16 Jun 2014 08:05:36 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.1-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: kib@FreeBSD.org X-Bugzilla-Status: Issue Resolved X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 08:05:37 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180236 Konstantin Belousov changed: What |Removed |Added ---------------------------------------------------------------------------- Status|Needs MFC |Issue Resolved CC| |kib@FreeBSD.org Resolution|--- |FIXED -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 08:40:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4EABBB3C for ; Mon, 16 Jun 2014 08:40:20 +0000 (UTC) Received: from mail-lb0-x235.google.com (mail-lb0-x235.google.com [IPv6:2a00:1450:4010:c04::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CC6E226A8 for ; Mon, 16 Jun 2014 08:40:19 +0000 (UTC) Received: by mail-lb0-f181.google.com with SMTP id p9so538803lbv.26 for ; Mon, 16 Jun 2014 01:40:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=NZjgG2jro+8tfBRPpFiYOZ+fe/4kJB3697bPcoHOGyE=; b=03xSa63cFlm8LEyGT9+wb1dj5aaxDrcnq3Lg2Y6Je0oNBnw3uzWTt6TWM3lk7rII5l L9ieM975sto+tXIDpmNXHhtZaxT5pMHFOMomzlDO2Y1llHMq6BYTUVHupADRsyvchHzP biO9yhQdAAkt+20UeuECHn3soZqoVYEyaxIIAJCW6IbXGxD9ANjpjcF01s1m+z9o7Vw+ /bfj6/l4LAMVM1ZOdreAWFy5WoDu2dvu8+XBRFTcLeWpo6gIfRELOckdg1O6b/54Ikfj as5B7EUeS578qfTWdfyn0TpL4u0MZn4zZQXf5hTTcz14BZQ8wCoPwiq67RJTOMvsBR26 9+QA== MIME-Version: 1.0 X-Received: by 10.113.4.70 with SMTP id cc6mr12475148lbd.21.1402908017627; Mon, 16 Jun 2014 01:40:17 -0700 (PDT) Received: by 10.112.137.69 with HTTP; Mon, 16 Jun 2014 01:40:17 -0700 (PDT) In-Reply-To: <20140616024942.GA13697@koodekoo.local> References: <20140615211052.GA63247@neutralgood.org> <20140616024942.GA13697@koodekoo.local> Date: Mon, 16 Jun 2014 09:40:17 +0100 Message-ID: Subject: Re: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0> From: Tom Evans To: Anders Jensen-Waud Content-Type: text/plain; charset=UTF-8 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 08:40:20 -0000 On Mon, Jun 16, 2014 at 3:49 AM, Anders Jensen-Waud wrote: > Running 'gpt recover /dev/da1' fixes the error above but after a reboot > it reappears. Would it be better to completely wipe the disk and > reinitialise it with zfs? > > Miraculously, an overnight 'zpool scrub storage' has wiped out the errors > from yesterday, and I am puzzled why that is the case. As per the > original zpool status from yesterday, ZFS warned that I needed to > recover all the files from backup > > aj@beastie> zpool status ~ > pool: backup > state: ONLINE > scan: none requested > config: > > NAME STATE READ WRITE CKSUM > backup ONLINE 0 0 0 > da1 ONLINE 0 0 0 > > errors: No known data errors > > pool: storage > state: ONLINE > scan: scrub repaired 984K in 11h37m with 0 errors on Mon Jun 16 01:55:48 2014 > config: > > NAME STATE READ WRITE CKSUM > storage ONLINE 0 0 0 > da0 ONLINE 0 0 0 > > errors: No known data errors > >> Running ZFS in a partition or on the entire disk is fine either way. But >> you have to be consistent. Partitioning a disk and then writing outside >> of the partition creates errors like the above GEOM one. > > Agree. In this instance it wasn't da0/storage, however. > You agree, but both of your pools are on the whole disk - da0 and da1 - and not on any partition on that disk. 
This means that consistently ZFS will trash the GPT table/labels, because you have told it that it can write there. If you GPT partition your disk, your pool should consist of partitions - da1p1, not da1. You do not need to GPT partition your disk unless you want to. Cheers Tom From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 14:11:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D3C583B9 for ; Mon, 16 Jun 2014 14:11:45 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "wonkity.com", Issuer "wonkity.com" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 67753250D for ; Mon, 16 Jun 2014 14:11:44 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.9/8.14.9) with ESMTP id s5GEBh4L006468 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Mon, 16 Jun 2014 08:11:43 -0600 (MDT) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.9/8.14.9/Submit) with ESMTP id s5GEBgIC006465; Mon, 16 Jun 2014 08:11:43 -0600 (MDT) (envelope-from wblock@wonkity.com) Date: Mon, 16 Jun 2014 08:11:42 -0600 (MDT) From: Warren Block To: Anders Jensen-Waud Subject: Re: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0> In-Reply-To: <20140616024942.GA13697@koodekoo.local> Message-ID: References: <20140615211052.GA63247@neutralgood.org> <20140616024942.GA13697@koodekoo.local> User-Agent: Alpine 2.11 (BSF 23 2013-08-11) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Mon, 16 Jun 2014 08:11:43 -0600 (MDT) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 14:11:45 -0000 On Mon, 16 Jun 2014, Anders Jensen-Waud wrote: > This disk is not the ``storage'' zpool -- it is my ``backup'' pool, > which is on a different drive: > > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > backup 464G 235G 229G 50% 1.00x ONLINE - > storage 928G 841G 87.1G 90% 1.00x ONLINE - What does 'zpool status' say about the device names of that pool? > Running 'gpt recover /dev/da1' fixes the error above but after a reboot > it reappears. Would it be better to completely wipe the disk and > reinitialise it with zfs? Most likely the problem is that the disk was GPT partitioned, but when the pool was created, ZFS was told to use the whole disk (ada0) rather than just a partition (ada0p1). One of the partition tables was overwritten by ZFS information. Possibly this space was mostly unused by ZFS, because otherwise a 'gpart recover' would have damaged it. This could also have happened if GPT partitioning was not cleared from the disk before using it for ZFS. ZFS leaves some unused space at the end of the disk, enough to not overwrite a backup GPT. That would be detected by GEOM, and not match the primary, which was overwritten by ZFS. The error would be spurious, but attempting a recovery could overwrite actual ZFS data. 
ZFS works fine on whole disks or in partitions. But yes, in this case, I'd back up, destroy the pool, destroy partition information on the drives, then recreate the pool. A handy way to make sure a backup GPT table is not left on a disk is to create and then destroy GPT partitioning:

   gpart destroy -F adaN
   gpart create -s gpt adaN
   gpart destroy adaN

From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 18:25:16 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C81E37B2 for ; Mon, 16 Jun 2014 18:25:16 +0000 (UTC) Received: from mail.helenius.fi (mail.helenius.fi [IPv6:2001:67c:164:40::91]) by mx1.freebsd.org (Postfix) with ESMTP id 7CE652E53 for ; Mon, 16 Jun 2014 18:25:15 +0000 (UTC) Received: from mail.helenius.fi (localhost [127.0.0.1]) by mail.helenius.fi (Postfix) with ESMTP id 2588A737B; Mon, 16 Jun 2014 18:25:11 +0000 (UTC) X-Virus-Scanned: amavisd-new at helenius.fi Received: from mail.helenius.fi ([127.0.0.1]) by mail.helenius.fi (mail.helenius.fi [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id B_t4cmVVLSVc; Mon, 16 Jun 2014 18:24:57 +0000 (UTC) Received: from [IPv6:2001:67c:164:42:4c3:716:fa22:2617] (unknown [IPv6:2001:67c:164:42:4c3:716:fa22:2617]) (Authenticated sender: pete) by mail.helenius.fi (Postfix) with ESMTPA id 717348438; Mon, 16 Jun 2014 06:40:20 +0000 (UTC) From: Petri Helenius Subject: l2arc compression leak Date: Mon, 16 Jun 2014 09:40:20 +0300 Message-Id: <5AD0B5C0-7C72-46FA-86D3-7AFA8FA1E84E@helenius.fi> To: developer@open-zfs.org Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) X-Mailer: Apple Mail (2.1878.2) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 16 Jun 2014 18:25:16 -0000 Hi, Recent FreeBSD 10-STABLE seems to be suffering from the L2ARC memory leak, eventually hanging on pfault. Should I apply this patch http://lists.open-zfs.org/pipermail/developer/2014-March/000535.html or wait for integration to SVN? 
Pete From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 07:04:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5D191B37 for ; Tue, 17 Jun 2014 07:04:29 +0000 (UTC) Received: from mail-yh0-x235.google.com (mail-yh0-x235.google.com [IPv6:2607:f8b0:4002:c01::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 21EE22BFA for ; Tue, 17 Jun 2014 07:04:29 +0000 (UTC) Received: by mail-yh0-f53.google.com with SMTP id b6so5189090yha.26 for ; Tue, 17 Jun 2014 00:04:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=uxbAPiKdEVshFuk2Ie7miKLot/sOAz1/fEe27AEXZ8w=; b=Iq7SubdTDQnslxsXdyiGV5uSq+MfLl52jvsl9jC+NyF38yTriuBIGXqd3wYHZKa/8i Ut1sbeKz9ViXbMqGcXoRIEduuOAAV0Os1ITMK4ZrYpy+1Cqoo5SoHe/czP1OPwiDQedX 225H4AO+cwRwqKb5ApE8RaFKoOstOlXX3nwrYfaHz8fXcQQDlZB2JoP7OsgzpINSKjSr iWoabct1ecRxVKFNuS0cXs7EPbd2AFkNSWs2zgC9wrJk0Osq16lIp2PjzH/yINEpr90d K1CsEym9Owe91SacswttcUPUJZUNW9mM5o7CWtyD/R+AkHM66Lm0FAqlt2o79qrRLcP8 4eCA== MIME-Version: 1.0 X-Received: by 10.236.185.105 with SMTP id t69mr40658949yhm.94.1402988666812; Tue, 17 Jun 2014 00:04:26 -0700 (PDT) Received: by 10.170.96.133 with HTTP; Tue, 17 Jun 2014 00:04:26 -0700 (PDT) In-Reply-To: <5AD0B5C0-7C72-46FA-86D3-7AFA8FA1E84E@helenius.fi> References: <5AD0B5C0-7C72-46FA-86D3-7AFA8FA1E84E@helenius.fi> Date: Tue, 17 Jun 2014 08:04:26 +0100 Message-ID: Subject: Re: l2arc compression leak From: krad To: Petri Helenius Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: developer@open-zfs.org, "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 07:04:29 -0000 That's really a decision for you, as your situation is specific to you, and you get hit with the penalties if anything goes wrong. If it's causing you a major problem in production and the risk/benefit ratio is worth it you could use it, but I would make sure you do rigorous testing first. However, if you don't have a specific issue, I would hold off until it's in stable at least. On 16 June 2014 07:40, Petri Helenius wrote: > > Hi, > > Recent FreeBSD 10-STABLE seems to be suffering from the L2ARC memory leak, > eventually hanging on pfault. > > Should I apply this patch > http://lists.open-zfs.org/pipermail/developer/2014-March/000535.html > > or wait for integration to SVN? 
> > Pete > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 08:25:25 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 47F59D5A for ; Tue, 17 Jun 2014 08:25:25 +0000 (UTC) Received: from mail.helenius.fi (r091.secroom.net [193.19.137.91]) by mx1.freebsd.org (Postfix) with ESMTP id BFD1722BF for ; Tue, 17 Jun 2014 08:25:22 +0000 (UTC) Received: from mail.helenius.fi (localhost [127.0.0.1]) by mail.helenius.fi (Postfix) with ESMTP id 287F477E4; Tue, 17 Jun 2014 08:25:14 +0000 (UTC) X-Virus-Scanned: amavisd-new at helenius.fi Received: from mail.helenius.fi ([127.0.0.1]) by mail.helenius.fi (mail.helenius.fi [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id zZDB0mR6KVwy; Tue, 17 Jun 2014 08:24:44 +0000 (UTC) Received: from [192.168.5.129] (a91-156-75-2.elisa-laajakaista.fi [91.156.75.2]) (Authenticated sender: pete) by mail.helenius.fi (Postfix) with ESMTPA id 32D7577CC; Tue, 17 Jun 2014 08:24:43 +0000 (UTC) Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: l2arc compression leak From: Petri Helenius In-Reply-To: Date: Tue, 17 Jun 2014 11:24:37 +0300 Message-Id: References: <5AD0B5C0-7C72-46FA-86D3-7AFA8FA1E84E@helenius.fi> To: krad X-Mailer: Apple Mail (2.1878.2) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: developer@open-zfs.org, "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 08:25:25 -0000 I wonder when this makes it to HEAD? Pete On 17 Jun 2014, at 10:04 , krad wrote: > That's really a decision for you, as your situation is specific to you, and you get hit with the penalties if anything goes wrong. If it's causing you a major problem in production and the risk/benefit ratio is worth it you could use it, but I would make sure you do rigorous testing first. However, if you don't have a specific issue, I would hold off until it's in stable at least. > > > On 16 June 2014 07:40, Petri Helenius wrote: > > Hi, > > Recent FreeBSD 10-STABLE seems to be suffering from the L2ARC memory leak, eventually hanging on pfault. > > Should I apply this patch > http://lists.open-zfs.org/pipermail/developer/2014-March/000535.html > > or wait for integration to SVN? 
> > Pete > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 15:47:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BE713D8D for ; Tue, 17 Jun 2014 15:47:32 +0000 (UTC) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 726F82D31 for ; Tue, 17 Jun 2014 15:47:32 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.8/8.14.8) with ESMTP id s5HFlGf2026243; Tue, 17 Jun 2014 08:47:17 -0700 (PDT) (envelope-from dg@pki2.com) Subject: Re: [Fwd: Re: Large ZFS arrays?] From: Dennis Glatting To: Kevin Day In-Reply-To: References: <1402846984.4722.363.camel@btw.pki2.com> Content-Type: text/plain; charset="iso-8859-13" Date: Tue, 17 Jun 2014 08:47:16 -0700 Message-ID: <1403020036.4722.445.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 8bit X-SoftwareMunitions-MailScanner-Information: Dennis Glatting X-SoftwareMunitions-MailScanner-ID: s5HFlGf2026243 X-SoftwareMunitions-MailScanner: Found to be clean X-MailScanner-From: dg@pki2.com Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 15:47:32 -0000 On Sun, 2014-06-15 at 11:00 -0500, Kevin Day wrote: > On Jun 15, 2014, at 10:43 AM, Dennis Glatting wrote: > > > > Total. I am looking at three pieces in total: > > > > * Two 1PB storage "blocks" providing load sharing and > > mirroring for failover. > > > > * One 5PB storage block for on-line archives (3-5 years). > > > > The 1PB nodes will be divided into something that makes sense, such as > > multiple SuperMicro 847 chassis with 3TB disks providing some number of > > volumes. Division is a function of application, such as 100TB RAIDz2 > > volumes for bulk storage whereas smaller 8TB volumes for active data, > > such as iSCSI, databases, and home directories. > > > > Thanks. > > > We're currently using multiples of the SuperMicro 847 chassis with 3TB > and 4TB drives, and LSI 9207 controllers. Each 45 drive array is > configured as four 11-drive raidz2 groups, plus one hot spare. > > A few notes: > > 1) I'd highly recommend against grouping them together into one giant > zpool unless you really really have to. We just spent a lot of time > redoing everything so that each 45 drive array is its own > zpool/filesystem. You're otherwise putting all your eggs into one very > big basket, and if something went wrong you'd lose everything rather > than just a subset of your data. If you don't do this, you'll almost > definitely have to run with sync=disabled, or the number of sync > requests hitting every drive will kill write performance. > > 2) You definitely want a JBOD controller instead of a smart RAID > controller. 
The LSI 9207 works pretty well, but when you exceed 192 > drives it complains on boot up of running out of heap space and makes > you press a key to continue, which then works fine. There is a very > recently released firmware update for the card that seems to fix this, > but we haven't completed testing yet. You'll also want to increase > hw.mps.max_chains. The driver warns you when you need to, but you need > to reboot to change this, and you're probably only going to discover > this under heavy load. > I had discovered the chains problem on some of my systems. Like most of the people on this list, I have a small data center in my home, and the spouse had the noisy servers "relocated" to the garage. :) > 3) We've played with L2ARC SSD devices, and aren't seeing much gain. > It appears that our active data set is so large that it'd need a huge > SSD to even hit a small percentage of our frequently used files. > setting "secondarycache=metadata" does seem to help a bit, but probably > not worth the hassle for us. This probably will depend entirely on your > workload though. > I'm curious if you have tried the TB or near-TB SSDs? I haven't looked to see if they are anything reliable, or fast. > 4) "zfs destroy" can be excruciatingly expensive on large datasets. > http://blog.delphix.com/matt/2012/07/11/performance-of-zfs-destroy/ > It's a bit better now, but don't assume you can "zfs destroy" without > killing performance to everything. > Is that still a problem? Both FreeBSD and ZFS-on-Linux had a significant problem on destroy but I am under the impression that is now backgrounded on FreeBSD (ZoL, however, destroyed the pool with dedup data). It's been several months since I deleted TB files but I seem to recall that non-dedup was now good but dedup will forever suck. > If you have specific questions, I'm happy to help, but I think most of > the advice I can offer is going to be workload specific. If I had to do > it all over again, I'd probably break things down into many smaller > servers rather than trying to put as much onto one. > Replication for on-line fail over. HAST may be an option but I haven't looked into it. 
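For anyone else who runs into the chain shortage discussed above: hw.mps.max_chains is a loader tunable, so it goes in /boot/loader.conf and only takes effect after a reboot. The value below is an illustrative guess, not a recommendation from either poster; see mps(4) for the default and the related read-only sysctls on your release:

   # /boot/loader.conf
   hw.mps.max_chains="4096"

   # after reboot, watch whether the driver is running low on chain frames
   sysctl dev.mps.0.chain_free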
-- Dennis Glatting From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 19:53:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B49F737A; Tue, 17 Jun 2014 19:53:45 +0000 (UTC) Received: from mail-pb0-f43.google.com (mail-pb0-f43.google.com [209.85.160.43]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 812B126B8; Tue, 17 Jun 2014 19:53:45 +0000 (UTC) Received: by mail-pb0-f43.google.com with SMTP id um1so3242909pbc.30 for ; Tue, 17 Jun 2014 12:53:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=4wGmpOS5qNUDWjxee8FNFmX1/qhLc/N2JM7wc79qWBA=; b=wf14VzStAZTtPoWxLiRvRq2A9qKLePwjntxal3CdUw1OS8fp1Fgc7QywrT63XZ6A5I g5TK10HtGZbY/A9/DF0olLnODl4MLGPFcrFY72DxzfnH0FXGnBe6sShU1s/qhx6MY1zf qT6x5EAMBiyGUSlvuANbzNnFExe7CdsLIWN6PifBwLNmDpRwnEAfNd9YteA3MBShfqla dEwaW2dJxMS8w6W4mVA2uVgLa4X7x+DWAFejUea7p/6ncKfVoNv04zP2RIk0ty9gC0pJ 8mkwpClyNOOo2SCPHV84RfY0t688wBkd9Eq6xb519muRDnHCkRzgTA78pHdeQ81WAdD0 XseQ== MIME-Version: 1.0 X-Received: by 10.68.254.5 with SMTP id ae5mr34679323pbd.83.1403034784190; Tue, 17 Jun 2014 12:53:04 -0700 (PDT) Received: by 10.70.75.195 with HTTP; Tue, 17 Jun 2014 12:53:04 -0700 (PDT) Received: by 10.70.75.195 with HTTP; Tue, 17 Jun 2014 12:53:04 -0700 (PDT) In-Reply-To: <308a6f01436240e060878bb0620d0946@mail.feld.me> References: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> <308a6f01436240e060878bb0620d0946@mail.feld.me> Date: Tue, 17 Jun 2014 22:53:04 +0300 Message-ID: Subject: Re: ZFS auto online From: Sami Halabi To: Mark Felder Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs@freebsd.org, Petri Helenius , owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 19:53:45 -0000 Hi, Waiting for zfsd and praying has not helped yet. Is there any working devd script that can be used as a reference? Thanks in advance, Sami On 29 May 2014 at 17:04, "Mark Felder" wrote: > On 2014-05-29 03:13, Petri Helenius wrote: > >> Hi, >> >> How do I get ZFS to automatically online a reattached device? >> >> > Besides waiting for zfsd, you could write a devd script that recognizes > the device by one of several identifiers and then automatically runs the > zfs commands you desire. 
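A minimal sketch of the kind of devd rule Mark describes, assuming a pool named tank whose disk carries the GPT label tank-disk0; the match values and the pool/label names are placeholders to adapt, not a tested recipe:

   # /usr/local/etc/devd/zfs-online.conf
   notify 100 {
       match "system"    "DEVFS";
       match "subsystem" "CDEV";
       match "type"      "CREATE";
       match "cdev"      "gpt/tank-disk0";
       # bring the vdev back online when its device node reappears
       action "zpool online tank /dev/gpt/tank-disk0";
   };

Run 'service devd restart' after dropping the file in place.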
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 20:17:09 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2513DF4B; Tue, 17 Jun 2014 20:17:09 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mail.feld.me", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9E12928D3; Tue, 17 Jun 2014 20:17:08 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id 11334870; Tue, 17 Jun 2014 15:17:04 -0500 (CDT) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=feld.me; h=mime-version :content-type:date:from:to:cc:subject:in-reply-to:references :message-id:sender; s=blargle2; bh=3VcTCrsluZB2jvIvBaIlFmMGra8=; b= TJAKJxzptFIRiOeKh0c3pIL0XFb82ajLu3bAZElNglw7wOxgAKGv2ngbB5HAuU89 ZfOzc/AN4CdQyZwQxwPxo1Wz3bxWTZmIjw6cyZ8NKU0pIuAWd5FFvoGnD6pQVn1Y a4+yCFI3XFYY4WmeanRx0ZJrW0C7J77HnivunCm/NBKmTA/xpStr7DKssnM4+8yM VjOcWgf79G/YJ873zsEms16y7zI1YTJ4k+Yofwn50SiYHbt0/CtY9/zHBDeG832q sfHC6vC14y4VsmCh+EGe9+E3dIwyJjMOnYQx17VndIYKDf+Ymnthni4ZCxLKidgh 2J72GzmSPJPr0JlYGQomvw== DomainKey-Signature: a=rsa-sha1; c=nofws; d=feld.me; h=mime-version :content-type:date:from:to:cc:subject:in-reply-to:references :message-id:sender; q=dns; s=blargle2; b=aMsQEf6Sf22zEvtA7sSIxKJ XRvs1XMDnha7t/OnCp+rxg6PUVGM1nJdSvDeYFWa+470pncjInk+5ii55ZnV6TH/ RvOXUoYGnSiFiis1yukCYCvOFPQv4mp5zmszOMmAnsI504mfAq6tyhlqBkgs7rC4 Jv3fLPtSXRgHnmnSQ0vrMSBN1zB5iAuyFZuk0wkVmmDlC4nqITBicCvulRubgsb7 aao6qFzH7obOqgEgFUbV8Or/AYFpJNEmG5Yyx5pi+G3Bhdl6pZuHBb8xizVNT51Z fpmctTYM/gAEcR1Fzws2QTjiCNFpaoTDfJiEj5VuIVf9rTtWx/pMeZl4XE7IzQg= = Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id 857238bd; Tue, 17 Jun 2014 15:17:04 -0500 (CDT) Received: from feld@feld.me by mail.feld.me (Archiveopteryx 3.2.0) with esmtpa id 1403036223-26378-26375/5/5; Tue, 17 Jun 2014 20:17:03 +0000 Mime-Version: 1.0 Content-Type: text/plain; format=flowed Date: Tue, 17 Jun 2014 15:17:03 -0500 From: Mark Felder To: Sami Halabi Subject: Re: ZFS auto online In-Reply-To: References: <7781CE90-D672-4A55-B7E9-47A48EA146E4@helenius.fi> <308a6f01436240e060878bb0620d0946@mail.feld.me> Message-Id: X-Sender: feld@FreeBSD.org User-Agent: Roundcube Webmail/1.0.1 Sender: feld@feld.me Cc: freebsd-fs@freebsd.org, Petri Helenius , owner-freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 20:17:09 -0000 On 2014-06-17 14:53, Sami Halabi wrote: > Hi, > Waiting for zfsd and pray not help yet.. > Is there any working devd script that can be used as a reference. 
> in my /usr/local/etc/devd/ I have a virtualbox.conf file with this in it:

# zalman
notify 50 {
    match "system" "USB";
    match "subsystem" "DEVICE";
    match "type" "ATTACH";
    match "vendor" "0x0928";
    action "chown feld:feld /dev/$cdev";
};

This recognizes a zalman USB drive by its vendor ID when devd sees it is attached and runs that chown command. I'm sure you could tweak this to your needs. From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 21:13:08 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CDFD13CC for ; Tue, 17 Jun 2014 21:13:08 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B5E352DD5 for ; Tue, 17 Jun 2014 21:13:08 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5HLD8Xr056416 for ; Tue, 17 Jun 2014 22:13:08 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 156781] [zfs] zfs is losing the snapshot directory, Date: Tue, 17 Jun 2014 21:13:07 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.2-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: aaron@omnigroup.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 21:13:08 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=156781 aaron@omnigroup.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |aaron@omnigroup.com --- Comment #11 from aaron@omnigroup.com --- This is happening for us too in the 9.1-RELEASE. Just noticed yesterday. We don't have a ton of snapshots, just a week's worth of daily ones. About 40GB of data. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 17 22:07:13 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1E89FFB9 for ; Tue, 17 Jun 2014 22:07:13 +0000 (UTC) Received: from mail-lb0-x22f.google.com (mail-lb0-x22f.google.com [IPv6:2a00:1450:4010:c04::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9CC5F2351 for ; Tue, 17 Jun 2014 22:07:12 +0000 (UTC) Received: by mail-lb0-f175.google.com with SMTP id q8so2058216lbi.20 for ; Tue, 17 Jun 2014 15:07:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=wsgauA9BhHPLhuv072YjHlmcYW9q4LvFunJWqueP5Xo=; b=KEzR0KqXycWjyqq5siPqR6ufd8Mrstc3kdOHceWlxounr1u14AL+L0O9nidk67e11s DrbDwT7auEDHMkKSB8oxyPvRo4cHW9jlMdGP/YKMxBkasLx9QVTDbWSwe3CFSaP6u5FG vx0uhbSeeOeYiGJcSa3TLXRyNCd7Qc0JsFnk1TH8Xi/o5/FmjK53h+bqU3+glH3c+bZM /KVi06QCr88IT8XciSjO0tViSFg5OapABEFAwuFqQSYSn3PzUFpkR4KA5jX033qFbZWE pMVz9Fd+myAQG3pTxeF07NRkoixYYElpvuppz42H59VtDYV/qFkifH61HeM9iLe7GswT 3pmg== MIME-Version: 1.0 X-Received: by 10.112.72.41 with SMTP id a9mr2999167lbv.71.1403042830596; Tue, 17 Jun 2014 15:07:10 -0700 (PDT) Received: by 10.112.137.69 with HTTP; Tue, 17 Jun 2014 15:07:10 -0700 (PDT) In-Reply-To: <1403020036.4722.445.camel@btw.pki2.com> References: <1402846984.4722.363.camel@btw.pki2.com> <1403020036.4722.445.camel@btw.pki2.com> Date: Tue, 17 Jun 2014 23:07:10 +0100 Message-ID: Subject: Re: [Fwd: Re: Large ZFS arrays?] From: Tom Evans To: Dennis Glatting Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 17 Jun 2014 22:07:13 -0000 On Tue, Jun 17, 2014 at 4:47 PM, Dennis Glatting wrote: > On Sun, 2014-06-15 at 11:00 -0500, Kevin Day wrote: >> 4) =E2=80=9Czfs destroy=E2=80=9D can be excruciatingly expensive on larg= e datasets. >> http://blog.delphix.com/matt/2012/07/11/performance-of-zfs-destroy/ >> It=E2=80=99s a bit better now, but don=E2=80=99t assume you can =E2=80= =9Czfs destroy=E2=80=9D without >> killing performance to everything. >> > > Is that still a problem? Both FreeBSD and ZFS-on-Linux had a significant > problem on destroy but I am under the impression that is now > backgrounded on FreeBSD (ZoL, however, destroyed the pool with dedup > data). It's been several months since I deleted TB files but I seem to > recall that non-dedup was now good but dedup will forever suck. > I had a 9-stable (9.1ish) box that I was migrating the data from to a newer box, and was caught by zfs destroy requiring an insane amount of memory. I idly "zfs destroy" a 5 TB fs to clean up some free space, the box churns to a semi halt as it exhausts all memory before finally giving out and panicking. I had to transfer the disks to the new host, boot from my rescue usb and force import the pool and allow the destroy to proceed. 
It still used *all* the memory on the newer box, but whenever it got to the point it would run out it would force a cleanup and start over. I can't imagine how deadly that would be to regular processes. Still, it showed me that moving the disks and then the data was quicker than trying to move the data over the network, and now I arrange my file systems a little more prudently! Cheers Tom From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 02:35:06 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B94B0BBF for ; Fri, 20 Jun 2014 02:35:06 +0000 (UTC) Received: from mail-qa0-x22f.google.com (mail-qa0-x22f.google.com [IPv6:2607:f8b0:400d:c00::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 741322646 for ; Fri, 20 Jun 2014 02:35:06 +0000 (UTC) Received: by mail-qa0-f47.google.com with SMTP id hw13so2698357qab.34 for ; Thu, 19 Jun 2014 19:35:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=3geeks.org; s=google; h=from:content-type:content-transfer-encoding:subject:message-id:date :to:mime-version; bh=h+N7ROw+uYM09RCn6A192ToNG39hSyRrJndkC6g8Wus=; b=OIy2e8HK59RfgBvbSO8LT0t5CNobyxdZQOIE4pYNTK6Gku1UqBwYkBUY2YRGaO3Bxl mRNISLsRgxn1HMVRFwrKXaks1JaXThJXMaIevpfuu0FOI6rm4OQ1qDA1MK/Fg4Rk8v9E SfXmBiXYgAlQTS+FLsCk3zweiImGi6YTXqbXA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:from:content-type:content-transfer-encoding :subject:message-id:date:to:mime-version; bh=h+N7ROw+uYM09RCn6A192ToNG39hSyRrJndkC6g8Wus=; b=WI8Getzqje8/yubmDXaxqg+gILAjoGBOR9lAry7+AMkC7NENTdSzyINinFod6MZBmE LHnY88y7mjFmSMrTVFR0cZM+F4rDj5RShC45ETiMhhaygLLAVJZEqo+tnaZbkNfkGFAV q/bz1c3dYLsbdrgDhSpSLa1ei+94PZGFdXwcCPj7INJsV6RnSHEK5jTq9/2jLQrG3BFP cJr6mDsBVbe4cPEcOsY9iI5Rb+7i0f4xiW/oeAZ7bXZewSCAY4lMXwnY4yITx1ft+oBf FHOmlDxziHtfEj8uhB+PTtJ2os+ktxW1qc/FsNHA7XANblnudMH5qpnhvfoSRHqghfSd l6zQ== X-Gm-Message-State: ALoCoQlmQMnOdeBWNZIWbSCx7J6zVXaNCUF24Ll/I96z3EF2Wj7riOCm5uJnjKOk+1qqBwxUJVto X-Received: by 10.224.127.197 with SMTP id h5mr584833qas.3.1403231705262; Thu, 19 Jun 2014 19:35:05 -0700 (PDT) Received: from treehunter.3geeks.org (pool-108-28-189-172.washdc.fios.verizon.net. [108.28.189.172]) by mx.google.com with ESMTPSA id 22sm4396735qgs.23.2014.06.19.19.35.02 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 19 Jun 2014 19:35:03 -0700 (PDT) From: Daniel Mayfield Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Subject: Debugging newnfs Message-Id: <0016EC7C-7DCC-47B4-AD12-798525045F89@3geeks.org> Date: Thu, 19 Jun 2014 22:35:01 -0400 To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) X-Mailer: Apple Mail (2.1878.2) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jun 2014 02:35:06 -0000 I have a very strange problem between an NFS server running FreeBSD 10 w/ ZFS and a number of FreeBSD 10 VMs running on a XenServer 6.2 SP1 host. The problem manifests as seemingly random permissions issues and/or IO errors on the clients when the ZFS pool is busy. 
There are no entries in dmesg on either side, and no errors logged in nfsstat either. If I keep the traffic down, the errors subside, but not completely. Other than tcpdump, how can I go about debugging this? Dan From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 13:17:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 31E2C543 for ; Fri, 20 Jun 2014 13:17:14 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id ED48E2067 for ; Fri, 20 Jun 2014 13:17:13 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Av4EAPEypFODaFve/2dsb2JhbABZg19agm2nMgEBAQEBAQaRa4ZsUwGBHHWEAwEBAQMBAQEBIAQnIAsFFg4KAgINGQIpAQkmBggHBAEcBIgZCA2sSp48F4EqhDiDYIRdBgEBGzQHgneBTASXX4QokheDXiE1fQgXIg X-IronPort-AV: E=Sophos;i="5.01,514,1400040000"; d="scan'208";a="132343521" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 20 Jun 2014 09:16:04 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id CAA4DB3F23; Fri, 20 Jun 2014 09:16:04 -0400 (EDT) Date: Fri, 20 Jun 2014 09:16:04 -0400 (EDT) From: Rick Macklem To: Daniel Mayfield Message-ID: <538359689.1860054.1403270164794.JavaMail.root@uoguelph.ca> In-Reply-To: <0016EC7C-7DCC-47B4-AD12-798525045F89@3geeks.org> Subject: Re: Debugging newnfs MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jun 2014 13:17:14 -0000 Daniel Mayfield wrote: > I have a very strange problem between an NFS server running FreeBSD > 10 w/ ZFS and a number of FreeBSD 10 VMs running on a XenServer 6.2 > SP1 host. The problem manifests as seemingly random permissions > issues and/or IO errors on the clients when the ZFS pool is busy. > There are no entries in dmesg on either side, and no errors logged > in nfsstat either. If I keep the traffic down, the errors subside, > but not completely. Other than tcpdump, how can I go about > debugging this? > Well, you didn't mention what mount options you are using or what network interfaces you are using, but here's a few things that might be worth looking at... The TSO max transmit segments issue: - Without going into all the details (there have been some recent commits like r264630 to try and alleviate this), if a net device driver cannot handle 35 mbufs in a transmit TSO segment, things will get broken. - Xen/netfront is a weird exception, which I think is ok so long as lagg or a vlan isn't layered on top of it. --> If you can disable TSO on both server and clients or reduce rsize,wsize to 32K on all client mounts and see if the problem persists, that is probably the best way to check this. (Since Xen/netfront is such a weird case, I am not 100% sure if doing the above will fix this problem, if it is being used) I also don't know if it is possible to have corrupted packets due to a hardware problem (bad memory or...) 
I also don't know if it is possible to have corrupted packets due to a hardware problem (bad memory or...) where the Xen/netfront world doesn't catch it.

If you use the "soft" mount option, you could easily get this when the server is slow to respond. I'd strongly recommend using "tcp" and not "soft" for your mounts. ("nfsstat -m" on the client will show you what the actual mount options in use are. This can be somewhat different than what is specified on the command line, since servers limit rsize/wsize, as an example.)

When you get a "permissions failure" case, check on the server to see if the permissions for the file appear correct on ZFS. If they are (or the problem disappears when you retry a command without changing permissions), you could have a caching issue. Other than capturing the packets and looking at them in wireshark (which knows NFS, unlike tcpdump) all you can do is try fiddling with the mount options related to caching and see if that helps. (Note that NFS does not have a cache coherency protocol, so if files are concurrently shared among multiple clients, all bets are off w.r.t. what the behaviour is. jhb@ is much better at this than I, since he seems to find lots of these weird cases at his workplace.)

Good luck with it, rick
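A minimal sketch of the checks Rick describes; the interface name and the fstab line are placeholders, not taken from the thread:

    # show the mount options actually in effect on the client
    nfsstat -m
    # disable TSO on the NFS-facing interface, on both client and server
    ifconfig igb0 -tso
    # or remount with 32K I/O sizes via an /etc/fstab entry such as
    # server:/export  /mnt  nfs  rw,nfsv3,tcp,rsize=32768,wsize=32768  0  0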
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 14:58:42 2014
From: Daniel Mayfield
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: Debugging newnfs
Date: Fri, 20 Jun 2014 10:58:39 -0400
In-Reply-To: <538359689.1860054.1403270164794.JavaMail.root@uoguelph.ca>

The server side is a set of vlans on a lagg of 4 igbs. The Xen side is the same setup, with the VMs in question attached to two different vlans.

Many different mounts, but the mount options all look like this:

nfsv3,tcp,resvport,hard,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=4048762,timeout=120,retrans=2

The permissions do not change, but repeat operations succeed and fail randomly.

There aren't any clients concurrently accessing the same mount.

On Fri, Jun 20, 2014 at 9:16 AM, Rick Macklem wrote:
> [...]
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 15:25:54 2014
From: Graham Allan
To: freebsd-fs@freebsd.org
Subject: Re: Large ZFS arrays?
Date: Fri, 20 Jun 2014 09:50:11 -0500
Message-ID: <53A44A23.6050604@physics.umn.edu>
In-Reply-To: <1402846139.4722.352.camel@btw.pki2.com>

On 6/15/2014 10:28 AM, Dennis Glatting wrote:
> Anyone built a large ZFS infrastructure (PB size) and care to share
> words of wisdom?

This is a bit of a late response but I wanted to put in our "me too" before I forget...

We have about 500TB of storage on ZFS at present, and plan to add 600TB more later this summer, mostly in similar arrangements to what I've seen discussed already - using Supermicro 847 JBOD chassis and a mixture of Dell R710/R720 head nodes, with LSI 9200-8e HBAs. One R720 has four 847 chassis attached, a couple R710s just have a single chassis. We originally installed one HBA in the R720 for each chassis but had some deadlock problems at one point, which was resolved by daisy-chaining the chassis from a single HBA. I had a feeling it was maybe related to kern/177536 but not really sure.
We've been running FreeBSD 9.1 on all the production nodes, though I've long wanted to (and am now beginning to) set up a reasonable long-term testing box where we could check out some of the kernel patches or tuning suggestions which come up - also beginning to test the 9.3 release for the next set of servers.

We built all these conservatively with each chassis as a separate pool, each having four 10-drive raidz2 vdevs, a couple of spares, a cheapish L2ARC SSD and a mirrored pair of ZIL SSDs (maybe unnecessary to mirror this these days?). I was using the Intel 24GB SLC drive for the ZIL, will need to choose something new for future pools.

Would be interesting to hear a little about experiences with the drives used... For our first "experimental" chassis we used 3TB Seagate desktop drives - cheap but not the best choice, 18 months later they are dropping like flies (luckily we can risk some cheapness here as most of our data can be re-transferred from other sites if needed). Another chassis has 2TB WD RE4 enterprise drives (no problems), and four others have 3TB and 4TB WD "Red" NAS drives... which are another "slightly risky" selection but so far have been very solid (also in some casual discussion with a WD field engineer he seemed to feel these would be fine for both ZFS and hadoop use).

Tracking drives for failures and replacements was a big issue for us. One of my co-workers wrote a nice perl script which periodically harvests all the data from the chassis (via sg3utils) and stores the mappings of chassis slots, da devices, drive labels, etc into a database. It also understands the layout of the 847 chassis and labels the drives for us according to some rules we made up - we do some prefix for the pool name, then "f" or "b" for front/back of chassis, then the slot number - and finally (?) it has some controls to turn the chassis drive identify lights on or off. There might be other ways to do all this but we didn't find any, so it's been incredibly useful for us.

As far as performance goes we've been pretty happy. Some of these get relatively hammered by NFS i/o from cluster compute jobs (maybe ~1200 processes on 100 nodes) and they have held up much better than our RHEL NFS servers using fiber channel RAID storage. We've also performed a few bulk transfers between hadoop and ZFS (using distcp with an NFS destination) and saw sustained 5Gbps write speeds (which really surprised me).

I think that's all I've got for now.

Graham
--
-------------------------------------------------------------------------
Graham Allan
School of Physics and Astronomy - University of Minnesota
-------------------------------------------------------------------------
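For concreteness, a pool of the shape Graham describes - four 10-drive raidz2 vdevs, two spares, an L2ARC device and a mirrored log pair - could be created roughly like this (pool and device names are invented for the example, not taken from his setup):

    zpool create tank \
      raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
      raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
      raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
      raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 \
      spare da40 da41 \
      cache da42 \
      log mirror da43 da44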
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 15:28:16 2014
From: Rich
To: Graham Allan
Cc: freebsd-fs
Subject: Re: Large ZFS arrays?
Date: Fri, 20 Jun 2014 11:28:16 -0400
In-Reply-To: <53A44A23.6050604@physics.umn.edu>

Just FYI, a lot of people who do this use sas[23]ircu for scripting this, rather than sg3utils, though the latter is more powerful if you have enough of the SAS spec to play with...

- Rich

On Fri, Jun 20, 2014 at 10:50 AM, Graham Allan wrote:
> [...]
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 15:41:15 2014
From: Freddie Cash
To: Graham Allan
Cc: FreeBSD Filesystems
Subject: Re: Large ZFS arrays?
Date: Fri, 20 Jun 2014 08:41:14 -0700
In-Reply-To: <53A44A23.6050604@physics.umn.edu>

On Fri, Jun 20, 2014 at 7:50 AM, Graham Allan wrote:
> Would be interesting to hear a little about experiences with the drives
> used... For our first "experimental" chassis we used 3TB Seagate desktop
> drives - cheap but not the best choice, 18 months later they are dropping
> like flies (luckily we can risk some cheapness here as most of our data can
> be re-transferred from other sites if needed). Another chassis has 2TB WD
> RE4 enterprise drives (no problems), and four others have 3TB and 4TB WD
> "Red" NAS drives... which are another "slightly risky" selection but so far
> have been very solid (also in some casual discussion with a WD field
> engineer he seemed to feel these would be fine for both ZFS and hadoop use).

We've had good experiences with WD Black drives (500 GB, 1 TB, and 2 TB).
These tend to last the longest, and provide the nicest failure modes. It's also very easy to understand the WD model numbers.

We've also used Seagate 7200.11 and 7200.12 drives (1 TB and 2 TB). These perform well, but fail in weird ways. They also tend to fail sooner than the WD. Thankfully, the RMA process with Seagate is fairly simple and turn-around time is fairly quick. Unfortunately, trying to figure out exactly which model of Seagate drive to order is becoming more of a royal pain as time goes on. They keep changing their marketing model names and the actual model numbers. There's now something like 8 separate product lines to pick from and 6+ different models in each line, times 2 for 4K vs 0.5K sectors.

We started out (3? 4? years ago) using WD Blue drives because they were inexpensive (like almost half the price of WD Black) and figured all the ZFS goodness would work well on them. Quickly found out that desktop drives really aren't suited to server work. Especially when being written to for 12+ hours a day. :)

We were going to try some Toshiba drives in our next setup, but we received an exceptionally good price on WD Black drives on our last tender ($80 CDN for 1 TB), so we decided to stick with those for now. :D After all, they work well, so why rock the boat?

We haven't used any drives larger than 2 TB as of yet.

> Tracking drives for failures and replacements was a big issue for us. One
> of my co-workers wrote a nice perl script which periodically harvests all
> the data from the chassis (via sg3utils) and stores the mappings of chassis
> slots, da devices, drive labels, etc into a database. [...]

We partition each drive into a single GPT partition (starting at 1 MB, covering whole disk), and label that partition with the chassis/slot that it's in. Then use the GPT label to build the pool (/dev/gpt/diskname). That way, all the metadata in the pool, and any error messages from ZFS, tell us exactly which disk, in which chassis, in which slot, is having issues. No external database required. :)

Currently using smartmontools and the periodic scripts to alert us of pending drive failures, and a custom cron job that checks the health of the pools for alerting us to actual drive failures. It's not pretty, but with only 4 large servers to monitor, it works for us. I'm hoping to eventually convert those scripts to Nagios plugins, and let our existing monitoring setup keep track of the ZFS pools as well.

--
Freddie Cash
fjwcash@gmail.com
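The partition-and-label workflow Freddie describes comes down to a couple of commands per disk; a sketch along those lines, with the pool name and the chassis/slot label invented for the example:

    # one GPT partition per disk, 1 MB aligned, labelled by chassis and slot
    gpart create -s gpt da0
    gpart add -t freebsd-zfs -a 1m -l tank-f05 da0
    # then build the vdevs from the stable labels instead of daX names
    zpool create tank raidz2 gpt/tank-f05 gpt/tank-f06 gpt/tank-f07 gpt/tank-f08 gpt/tank-f09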
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 16:56:48 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 20 Jun 2014 16:56:49 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594

Adrian Chadd changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |adrian@freebsd.org

--- Comment #17 from Adrian Chadd ---
I'll swing alan cox into it and see what he thinks.

Thanks!

-a

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 17:08:36 2014
From: Michael Jung
To: "freebsd-fs@freebsd.org"
Subject: FW: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 20 Jun 2014 13:03:01 -0400

Finally! +1 - works great and is stable.

--mikej

-----Original Message-----
From: owner-freebsd-fs@freebsd.org [mailto:owner-freebsd-fs@freebsd.org] On Behalf Of bugzilla-noreply@freebsd.org
Sent: Friday, June 20, 2014 12:57 PM
To: freebsd-fs@freebsd.org
Subject: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
[...]
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 20:01:19 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 20 Jun 2014 20:01:20 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594

fullermd@over-yonder.net changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |fullermd@over-yonder.net

--- Comment #18 from fullermd@over-yonder.net ---
I've also been running the patch (from http://...) for 1-2 weeks now on a couple stable/10 boxes and one -CURRENT, ranging from 4 to 16 gig of RAM. No problems noted, swapping definitely less aggressive so I've not yet waited for anything to swap back in. Doesn't seem to starve the ARC either.

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 20:55:21 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 20 Jun 2014 20:55:21 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594

--- Comment #19 from Adrian Chadd ---
From alc: I gave it a cursory look. The patch appears to use "vm_cnt.v_free_target" incorrectly. If you look at sys/vmmeter, specifically,

/*
 * Return TRUE if we have not reached our free page target during
 * free page recovery operations.
 */
static __inline int
vm_page_count_target(void)
{
        return (vm_cnt.v_free_target > (vm_cnt.v_free_count + vm_cnt.v_cache_count));
}

/*
 * Return the number of pages we need to free-up or cache
 * A positive number indicates that we do not have enough free pages.
 */
static __inline int
vm_paging_target(void)
{
        return (vm_cnt.v_free_target - (vm_cnt.v_free_count + vm_cnt.v_cache_count));
}

you see that "vm_cnt.v_free_target" should be compared to "(vm_cnt.v_free_count + vm_cnt.v_cache_count)", not "vm_cnt.v_free_count".

--
You are receiving this mail because:
You are the assignee for the bug.
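The counters alc refers to are also exported as sysctls, so the two sides of the comparison can be eyeballed on a live system (sysctl names as of 10.x):

    sysctl vm.stats.vm.v_free_target vm.stats.vm.v_free_count vm.stats.vm.v_cache_count
    # per alc, a reclaim test should only fire while
    #   v_free_target > (v_free_count + v_cache_count)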
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 21:11:03 2014
From: Rick Macklem
To: Daniel Mayfield
Cc: freebsd-fs@freebsd.org
Subject: Re: Debugging newnfs
Date: Fri, 20 Jun 2014 17:11:01 -0400 (EDT)
Message-ID: <373087919.2114818.1403298661172.JavaMail.root@uoguelph.ca>

Daniel Mayfield wrote:
> The server side is a set of vlans on a lagg of 4 igbs.

I think igb net interfaces have a limit of 64 transmit segments (IGB_MAX_SCATTER), so they should be ok with TSO enabled.

> The Xen side is the same setup, with the VMs in question attached to
> two different vlans.

Well, from what I know, using lagg on top of a Xen/netfront net device will definitely be a problem, unless you have r265290 and r265412. (Without these patches, the setting of if_hw_tsomax done by Xen's netfront is not propagated up to tcp_output(). The same statements apply to if_vlan.c, with the patch r265291.) I know nothing about Xen, so I have no idea if you are using the Xen/netfront virtual net driver, but using lagg and/or vlan on top of it is definitely broken without the recent patches. If you can disable TSO, that will be a workaround for this.

> [...]
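On the guest side this comes down to the interface capability flags; a quick check, with the interface names assumed rather than taken from Daniel's setup:

    # look for TSO4 in the options= line of the netfront and vlan interfaces
    ifconfig xn0
    ifconfig vlan10
    # disable TSO as the workaround Rick suggests
    ifconfig xn0 -tso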
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 20 21:18:34 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix
Date: Fri, 20 Jun 2014 21:18:34 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594

karl@denninger.net changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |karl@denninger.net

--- Comment #20 from karl@denninger.net ---
No, because memory in "cache" is subject to being either reallocated or freed. When I was developing this patch that was my first impression as well, and how I originally coded it, and it turned out to be wrong.

The issue here is that you have two parts of the system contending for RAM -- the VM system generally, and the ARC cache. If the ARC cache frees space before the VM system activates and starts pruning, then you wind up with the ARC pinned at the minimum after some period of time, because it releases "early." The original ZFS code releases ARC only when the VM system goes into "desperation" mode. That's too late and results in pathological behavior including long freezes where nothing appears to happen at all. What appears to actually be happening is that the ARC is essentially dumped while paging is occurring, and the system reacts very badly to that.

The test as it sits now activates the ARC pare-down at the point the VM system wakes up.
The two go into and out of contention at roughly the same time, resulting in a balanced result -- the ARC stabilizes at a value allowing some cached pages to remain, but cached pages do not grow without boundary, nor does the system get into a page starvation situation and hit the "freeze" condition trying to free huge chunks of ARC at once.

If you have a need to bias the ARC pare-down more aggressively, you can through the tunables, but the existing code is what, after much experimentation across multiple workloads and RAM sizes, was found to result in both a stable ARC and a stable cache page population over long periods of time (weeks of uptime across varying loads.)

As currently implemented this has now been running untouched for several months on an extremely busy web, database (Postgresql) and internal Samba server without incident.

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 21 15:19:56 2014
From: Justin Clift
To: freebsd-fs@freebsd.org
Subject: FreeBSD support being added to GlusterFS
Date: Sat, 21 Jun 2014 16:19:47 +0100

Hi all,

The GlusterFS project is looking to add official support for FreeBSD to our next release.

Does anyone have time to try out a tarball snapshot (from today) of our code so far?

https://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140621.tar.bz2

In theory (!), it should compile ok using:

$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

I'm not familiar enough with the ports system in FreeBSD any more to remember how to make it work there. Hopefully someone else can help with that too.

Feedback/thoughts? :)

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
From owner-freebsd-fs@FreeBSD.ORG Sat Jun 21 18:40:44 2014
From: Dennis Glatting
Reply-To: dg17@penx.com
To: freebsd-fs@freebsd.org
Subject: Re: FreeBSD support being added to GlusterFS
Date: Sat, 21 Jun 2014 11:40:19 -0700
Message-ID: <1403376020.4960.340.camel@btw.pki2.com>

Does this exist in ports?

On Sat, 2014-06-21 at 16:19 +0100, Justin Clift wrote:
> [...]
From owner-freebsd-fs@FreeBSD.ORG Sat Jun 21 18:43:35 2014
From: Sean Bruno
Reply-To: sbruno@freebsd.org
To: dg17@penx.com
Cc: freebsd-fs@freebsd.org
Subject: Re: FreeBSD support being added to GlusterFS
Date: Sat, 21 Jun 2014 11:43:31 -0700
Message-ID: <1403376211.39384.18.camel@bruno>
In-Reply-To: <1403376020.4960.340.camel@btw.pki2.com>

On Sat, 2014-06-21 at 11:40 -0700, Dennis Glatting wrote:
> Does this exist in ports?
> [...]

Doesn't look like it.
sean

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 21 20:51:38 2014
From: Justin Clift
To: sbruno@freebsd.org
Cc: freebsd-fs@freebsd.org, dg17@penx.com
Subject: Re: FreeBSD support being added to GlusterFS
Date: Sat, 21 Jun 2014 21:51:34 +0100
Message-Id: <92C0B7EA-1AD1-40FC-A179-A300AC79D940@gluster.org>
In-Reply-To: <1403376211.39384.18.camel@bruno>

On 21/06/2014, at 7:43 PM, Sean Bruno wrote:
> On Sat, 2014-06-21 at 11:40 -0700, Dennis Glatting wrote:
>> Does this exist in ports?

So far, it doesn't seem to. Probably because we've not had good FreeBSD support thus far.

There is a GlusterFS page on the FreeBSD wiki, but it mainly talks about a 1.x version of GlusterFS from many years ago. Our last release was v3.5, and FreeBSD support will be in v3.6 (if all goes well).

Does that help? :)

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
My personal twitter: twitter.com/realjustinclift From owner-freebsd-fs@FreeBSD.ORG Sun Jun 22 00:07:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EBAC9DB3 for ; Sun, 22 Jun 2014 00:07:52 +0000 (UTC) Received: from thyme.infocus-llc.com (server.infocus-llc.com [206.156.254.44]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client CN "*.infocus-llc.com", Issuer "*.infocus-llc.com" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id B0C8D20BE for ; Sun, 22 Jun 2014 00:07:52 +0000 (UTC) Received: from draco.over-yonder.net (c-75-65-60-66.hsd1.ms.comcast.net [75.65.60.66]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by thyme.infocus-llc.com (Postfix) with ESMTPSA id 16D7137B547; Sat, 21 Jun 2014 19:07:45 -0500 (CDT) Received: by draco.over-yonder.net (Postfix, from userid 100) id 3gwvG83TFkz8pC; Sat, 21 Jun 2014 19:07:44 -0500 (CDT) Date: Sat, 21 Jun 2014 19:07:44 -0500 From: "Matthew D. Fuller" To: Justin Clift Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140622000744.GW86779@over-yonder.net> References: MIME-Version: 1.0 In-Reply-To: X-Editor: vi X-OS: FreeBSD User-Agent: Mutt/1.5.23-fullermd.4 (2014-03-12) X-Virus-Scanned: clamav-milter 0.98.3 at thyme.infocus-llc.com X-Virus-Status: Clean Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 22 Jun 2014 00:07:53 -0000 On Sat, Jun 21, 2014 at 04:19:47PM +0100 I heard the voice of Justin Clift, and lo! it spake thus: > > I'm not familiar enough with the ports system in FreeBSD any more to > remember how to make it work there. Hopefully someone else can help > with that too. Attached shar of a quick&dirty port skeleton for it. Not tested beyond building and 'make package'. Note one patch for mount(2) args differing BSD/Linux. pkg complains about how it links in python when making the package, but I dunno whether that would break it or not. -- Matthew Fuller (MF4839) | fullermd@over-yonder.net Systems/Network Administrator | http://www.over-yonder.net/~fullermd/ On the Internet, nobody can hear you scream. 
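The mount(2) difference Fuller mentions is the main portability seam such a port patch has to cover: Linux mount(2) takes (source, target, fstype, flags, data), while FreeBSD code normally calls nmount(2) with a vector of name/value iovecs. Below is a minimal sketch of such a shim. It is hypothetical code, not the patch from the attached shar or anything in the GlusterFS tree; the "fusefs" fstype and "from"/"fspath" option names assume FreeBSD's FUSE backend, and a real Linux FUSE mount also passes fd/rootmode options via the data argument.

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/mount.h>

#ifdef __FreeBSD__
/* FreeBSD: nmount(2) consumes name/value pairs packed as iovecs. */
static void
set_pair(struct iovec *iov, const char *name, const char *val)
{
	iov[0].iov_base = (void *)(uintptr_t)name;
	iov[0].iov_len = strlen(name) + 1;
	iov[1].iov_base = (void *)(uintptr_t)val;
	iov[1].iov_len = strlen(val) + 1;
}

static int
do_mount(const char *from, const char *fspath)
{
	struct iovec iov[6];

	set_pair(&iov[0], "fstype", "fusefs");	/* assumed fs type name */
	set_pair(&iov[2], "fspath", fspath);
	set_pair(&iov[4], "from", from);
	return (nmount(iov, 6, 0));
}
#else
/* Linux: mount(2) takes source, target, type, flags, and a data blob
 * (a real FUSE mount passes "fd=...,rootmode=..." options via data). */
static int
do_mount(const char *from, const char *fspath)
{
	return (mount(from, fspath, "fuse", 0, NULL));
}
#endif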
From owner-freebsd-fs@FreeBSD.ORG Sun Jun 22 00:20:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 759BCF6D for ; Sun, 22 Jun 2014 00:20:48 +0000 (UTC) Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mx1.redhat.com", Issuer "Red Hat IS CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 4F3E82182 for ; Sun, 22 Jun 2014 00:20:48 +0000 (UTC) Received: from int-mx13.intmail.prod.int.phx2.redhat.com (int-mx13.intmail.prod.int.phx2.redhat.com [10.5.11.26]) by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s5M0KlCf005954 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sat, 21 Jun 2014 20:20:47 -0400 Received: from [10.36.5.30] (vpn1-5-30.ams2.redhat.com [10.36.5.30]) by int-mx13.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id s5M0KiG1018457 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Sat, 21 Jun 2014 20:20:46 -0400 Subject: Re: FreeBSD support being added to GlusterFS Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=us-ascii From: Justin Clift In-Reply-To: <20140622000744.GW86779@over-yonder.net> Date: Sun, 22 Jun 2014 01:20:44 +0100 Content-Transfer-Encoding: 7bit Message-Id: References: <20140622000744.GW86779@over-yonder.net> To: "Matthew D. Fuller" X-Scanned-By: MIMEDefang 2.68 on 10.5.11.26 Cc: freebsd-fs@freebsd.org, Harshavardhana X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 22 Jun 2014 00:20:48 -0000 On 22/06/2014, at 1:07 AM, Matthew D. Fuller wrote: > On Sat, Jun 21, 2014 at 04:19:47PM +0100 I heard the voice of > Justin Clift, and lo! it spake thus: >> >> I'm not familiar enough with the ports system in FreeBSD any more to >> remember how to make it work there. Hopefully someone else can help >> with that too. > > Attached shar of a quick&dirty port skeleton for it. Not tested > beyond building and 'make package'. Note one patch for mount(2) args > differing BSD/Linux. pkg complains about how it links in python when > making the package, but I dunno whether that would break it or not. Thanks Mike. :) Just forwarded that to Harsh and Mike Ma, the guys that have been making the FreeBSD version happen. Regards and best wishes, Justin Clift -- GlusterFS - http://www.gluster.org An open source, distributed file system scaling to several petabytes, and handling thousands of clients. 
My personal twitter: twitter.com/realjustinclift From owner-freebsd-fs@FreeBSD.ORG Sun Jun 22 16:38:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2AF7957B for ; Sun, 22 Jun 2014 16:38:58 +0000 (UTC) Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mx1.redhat.com", Issuer "Red Hat IS CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 04EAA2310 for ; Sun, 22 Jun 2014 16:38:57 +0000 (UTC) Received: from int-mx14.intmail.prod.int.phx2.redhat.com (int-mx14.intmail.prod.int.phx2.redhat.com [10.5.11.27]) by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s5MGcosZ010969 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sun, 22 Jun 2014 12:38:50 -0400 Received: from [10.36.5.213] (vpn1-5-213.ams2.redhat.com [10.36.5.213]) by int-mx14.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id s5MGclx8020787 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Sun, 22 Jun 2014 12:38:49 -0400 Subject: Re: FreeBSD support being added to GlusterFS Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=us-ascii From: Justin Clift In-Reply-To: <20140622000744.GW86779@over-yonder.net> Date: Sun, 22 Jun 2014 17:38:48 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <33E41869-BF14-491D-9FE8-700D3CF2AB79@gluster.org> References: <20140622000744.GW86779@over-yonder.net> To: "Matthew D. Fuller" X-Scanned-By: MIMEDefang 2.68 on 10.5.11.27 Cc: freebsd-fs@freebsd.org, Harshavardhana X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 22 Jun 2014 16:38:58 -0000 On 22/06/2014, at 1:07 AM, Matthew D. Fuller wrote: > On Sat, Jun 21, 2014 at 04:19:47PM +0100 I heard the voice of > Justin Clift, and lo! it spake thus: >> >> I'm not familiar enough with the ports system in FreeBSD any more to >> remember how to make it work there. Hopefully someone else can help >> with that too. > > Attached shar of a quick&dirty port skeleton for it. Not tested > beyond building and 'make package'. Note one patch for mount(2) args > differing BSD/Linux. pkg complains about how it links in python when > making the package, but I dunno whether that would break it or not. New tarball here now too: https://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140622.tar.bz2 This one should work on FreeBSD versions older than v10 now, and doesn't require libexecinfo installed.
My personal twitter: twitter.com/realjustinclift From owner-freebsd-fs@FreeBSD.ORG Mon Jun 23 05:34:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6339D1A1 for ; Mon, 23 Jun 2014 05:34:47 +0000 (UTC) Received: from mail-qg0-f53.google.com (mail-qg0-f53.google.com [209.85.192.53]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 22A052E8F for ; Mon, 23 Jun 2014 05:34:46 +0000 (UTC) Received: by mail-qg0-f53.google.com with SMTP id i50so5431897qgf.40 for ; Sun, 22 Jun 2014 22:34:39 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=nO/fkJBWdNfOAmj8GFWy36myHVAA3EfYGES2IeuOFWw=; b=dxQzw3e88331dpF4VQAOG8jy2Bq60Pqo+wdFRQNnzDdZUuQui8NtloOv+B6mXYSWPv L0qppopypLbpJSpJX3IWBbHZxMyeW0HQD270HA6lVaYZBeil9v/KBhiP8YfDmeC7IUg/ kRQpo289pKAa1CfruErHbt/41300EUWpfe4pgNZzN7ZWUjuhbzkS/Vl+WAAOqkDoyO4d AiE4K4yZFOcShQIeFv+fE552R559msOFD4VzcXWy+EgG3GOy5S9IC2CSdnVitJPXK0Le mM8pmKba7dWtgl4siwrnSkV70WhxMD4yVZPntKW9EH/IpArFYrsEklPo4vdF3vcch+SM XJ3w== X-Gm-Message-State: ALoCoQmw/YL0zkgNuyy0MrIhdH/kkgkMFN8wPKtr3EnzAB1wAob+4MTwXl+oLr+ppfbXDwlP1vqx MIME-Version: 1.0 X-Received: by 10.140.89.201 with SMTP id v67mr27925727qgd.71.1403501332112; Sun, 22 Jun 2014 22:28:52 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sun, 22 Jun 2014 22:28:52 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <33E41869-BF14-491D-9FE8-700D3CF2AB79@gluster.org> References: <20140622000744.GW86779@over-yonder.net> <33E41869-BF14-491D-9FE8-700D3CF2AB79@gluster.org> Date: Sun, 22 Jun 2014 22:28:52 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Justin Clift Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, "Matthew D. Fuller" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 23 Jun 2014 05:34:47 -0000 https://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140623.tar.bz2 - new tarball uploaded fixes the FUSE segfault which was observed with previous tarball. Thanks On Sun, Jun 22, 2014 at 9:38 AM, Justin Clift wrote: > On 22/06/2014, at 1:07 AM, Matthew D. Fuller wrote: >> On Sat, Jun 21, 2014 at 04:19:47PM +0100 I heard the voice of >> Justin Clift, and lo! it spake thus: >>> >>> I'm not familiar enough with the ports system in FreeBSD any more to >>> remember how to make it work there. Hopefully someone else can help >>> with that too. >> >> Attached shar of a quick&dirty port skeleton for it. Not tested >> beyond building and 'make package'. Note one patch for mount(2) args >> differing BSD/Linux. pkg complains about how it links in python when >> making the package, but I dunno whether that would break it or not. > > New tarball here now too: > > https://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140622.tar.bz2 > > This one should work on FreeBSD versions older than v10 now, and > doesn't require libexecinfo installed. 
(note, I'm not the coder > doing this, that's Harsha, CC'd). > > :) > > Regards and best wishes, > > Justin Clift > > -- > GlusterFS - http://www.gluster.org > > An open source, distributed file system scaling to several > petabytes, and handling thousands of clients. > > My personal twitter: twitter.com/realjustinclift > -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes From owner-freebsd-fs@FreeBSD.ORG Mon Jun 23 08:00:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 59E7D701 for ; Mon, 23 Jun 2014 08:00:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3C6C12936 for ; Mon, 23 Jun 2014 08:00:11 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5N80Ba1086709 for ; Mon, 23 Jun 2014 09:00:11 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) Message-Id: <201406230800.s5N80Ba1086709@kenobi.freebsd.org> From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Mon, 23 Jun 2014 08:00:11 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 23 Jun 2014 08:00:11 -0000 Hi, You have a bug in the "Needs MFC" state which has not been touched in 7 or more days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout you may update this bug with a comment and I won't remind you again for 7 days. This reminder is only sent on Mondays. Please file a bug about concerns you may have. This search was scheduled by eadler@FreeBSD.org.
(5 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device From owner-freebsd-fs@FreeBSD.ORG Mon Jun 23 14:14:01 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 98083281 for ; Mon, 23 Jun 2014 14:14:01 +0000 (UTC) Received: from mail.ultra-secure.de (mail.ultra-secure.de [88.198.178.88]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D46EA2CD8 for ; Mon, 23 Jun 2014 14:14:00 +0000 (UTC) Received: (qmail 34773 invoked by uid 89); 23 Jun 2014 14:14:00 -0000 Received: by simscan 1.4.0 ppid: 34768, pid: 34770, t: 0.0447s scanners: attach: 1.4.0 clamav: 0.97.3/m:55/d:19123 Received: from unknown (HELO suse3.ewadmin.local) (rainer@ultra-secure.de@212.71.117.1) by mail.ultra-secure.de with ESMTPA; 23 Jun 2014 14:14:00 -0000 Date: Mon, 23 Jun 2014 16:13:54 +0200 From: Rainer Duffner To: freebsd-fs@freebsd.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Message-ID: <20140623161354.2fdd1289@suse3.ewadmin.local> In-Reply-To: <201405151530.s4FFU0d6050580@freefall.freebsd.org> References: <201405151530.s4FFU0d6050580@freefall.freebsd.org> X-Mailer: Claws Mail 3.9.2 (GTK+ 2.24.22; x86_64-suse-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 23 Jun 2014 14:14:01 -0000 Am Thu, 15 May 2014 15:30:00 GMT schrieb Karl Denninger : > The following reply was made to PR kern/187594; it has been noted by > GNATS. > > From: Karl Denninger > To: bug-followup@FreeBSD.org > Cc: > Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and > fix Date: Thu, 15 May 2014 10:05:34 -0500 > > This is a cryptographically signed message in MIME format. 
> > I have now been running the latest delta as posted 26 March -- it > is coming up on two months now, has been stable here and I've seen > several positive reports and no negative ones on impact for > others. Performance continues to be "as expected." > > Is there an expectation on this being merged forward and/or MFC'd? > Hi, I'm looking into applying your patch to one of my servers. However, I've got a question: can I apply it to stock 10.0? If I need to go 10-stable, which revision do you recommend? The server in question only has 32GB of RAM and I fear that once it does NFS in addition to MySQL, it will be bogged down. Best Regards, Rainer From owner-freebsd-fs@FreeBSD.ORG Mon Jun 23 15:59:23 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7518A35D for ; Mon, 23 Jun 2014 15:59:23 +0000 (UTC) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 1EB4926D5 for ; Mon, 23 Jun 2014 15:59:22 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by fs.denninger.net (8.14.8/8.14.8) with ESMTP id s5NFxCjk012146 for ; Mon, 23 Jun 2014 10:59:12 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [127.0.0.1] (TLS/SSL) [192.168.1.40] by Spamblock-sys (LOCAL/AUTH); Mon Jun 23 10:59:12 2014 Message-ID: <53A84ECA.6030308@denninger.net> Date: Mon, 23 Jun 2014 10:59:06 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix [SB QUAR: Mon Jun 23 09:14:03 2014] References: <201405151530.s4FFU0d6050580@freefall.freebsd.org> <20140623161354.2fdd1289@suse3.ewadmin.local> In-Reply-To: <20140623161354.2fdd1289@suse3.ewadmin.local> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms090805020304010000020509" X-Antivirus: avast! (VPS 140623-0, 06/23/2014), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 23 Jun 2014 15:59:23 -0000 This is a cryptographically signed message in MIME format. On 6/23/2014 9:13 AM, Rainer Duffner wrote: > Am Thu, 15 May 2014 15:30:00 GMT > schrieb Karl Denninger : > >> The following reply was made to PR kern/187594; it has been noted by >> GNATS. >> >> From: Karl Denninger >> To: bug-followup@FreeBSD.org >> Cc: >> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and >> fix Date: Thu, 15 May 2014 10:05:34 -0500 >> >> This is a cryptographically signed message in MIME format.
>> >> I have now been running the latest delta as posted 26 March -- it >> is coming up on two months now, has been stable here and I've seen >> several positive reports and no negative ones on impact for >> others. Performance continues to be "as expected." >> >> Is there an expectation on this being merged forward and/or MFC'd? >> > > Hi, > > > I'm looking into applying your patch to one of my servers. > > However, I've got a question: can I apply it to stock 10.0? > If I need to go 10-stable, which revision do you recommend? > > > The server in question only has 32GB of RAM and I fear that once it > does NFS in addition to MySQL, it will be bogged down. > > > > > Best Regards, > Rainer > It should apply cleanly to stock 10.0 but I have not checked it recently. There was a change made in the VM defines by the rest of the team but I believe it is properly ifdef'd so as to figure it out and work both before and after. -- -- Karl karl@denninger.net [S/MIME cryptographic signature attachment (smime.p7s) omitted]
From owner-freebsd-fs@FreeBSD.ORG Tue Jun 24 13:57:42 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B925E243 for ; Tue, 24 Jun 2014 13:57:42 +0000 (UTC) Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mx1.redhat.com", Issuer "Red Hat IS CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 9233F2C82 for ; Tue, 24 Jun 2014 13:57:42 +0000 (UTC) Received: from int-mx11.intmail.prod.int.phx2.redhat.com (int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24]) by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s5ODvZqv005004 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Tue, 24 Jun 2014 09:57:35 -0400 Received: from [10.36.7.246] (vpn1-7-246.ams2.redhat.com [10.36.7.246]) by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id s5ODvSdM003951 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Tue, 24 Jun 2014 09:57:33 -0400 Subject: Re: FreeBSD support being added to GlusterFS Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=us-ascii From: Justin Clift In-Reply-To: <20140622000744.GW86779@over-yonder.net> Date: Tue, 24 Jun 2014 14:57:27 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <1B58A675-354E-4787-8078-F85B354B2912@gluster.org> References: <20140622000744.GW86779@over-yonder.net> To: "Matthew D.
Fuller" X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 24 Jun 2014 13:57:42 -0000 On 22/06/2014, at 1:07 AM, Matthew D. Fuller wrote: > On Sat, Jun 21, 2014 at 04:19:47PM +0100 I heard the voice of > Justin Clift, and lo! it spake thus: >>=20 >> I'm not familiar enough with the ports system in FreeBSD any more to >> remember how to make it work there. Hopefully someone else can help >> with that too.=20 >=20 > Attached shar of a quick&dirty port skeleton for it. Not tested > beyond building and 'make package'. Note one patch for mount(2) args > differing BSD/Linux. pkg complains about how it links in python when > making the package, but I dunno whether that would break it or not. Thanks Matthew, that definitely helped. Jordan Hubbard committed an updated version of that to the FreeNAS repo yesterday: = https://github.com/freenas/ports/commit/5a7f56db3cf1b4913cee88bf1083b780c6= 6ae18e It leverages a slightly later tarball, adjusted so mount(2) works :) and the FUSE client doesn't crash. Is that commit above usable for main FreeBSD as well? Note though, that's an early pre-alpha version of GlusterFS v3.6, so I'm not sure if it's suitable for inclusion in a "stable" repo? ;) Regards and best wishes, Justin Clift -- GlusterFS - http://www.gluster.org An open source, distributed file system scaling to several petabytes, and handling thousands of clients. My personal twitter: twitter.com/realjustinclift From owner-freebsd-fs@FreeBSD.ORG Tue Jun 24 21:05:16 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E50E1EBF for ; Tue, 24 Jun 2014 21:05:16 +0000 (UTC) Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mx1.redhat.com", Issuer "Red Hat IS CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id AC1AC2E4C for ; Tue, 24 Jun 2014 21:05:16 +0000 (UTC) Received: from int-mx10.intmail.prod.int.phx2.redhat.com (int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23]) by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s5OL5Fsn016581 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK) for ; Tue, 24 Jun 2014 17:05:15 -0400 Received: from [10.36.7.246] (vpn1-7-246.ams2.redhat.com [10.36.7.246]) by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id s5OL5CZU012030 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO) for ; Tue, 24 Jun 2014 17:05:14 -0400 Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Apple Message framework v1283) Subject: Re: FreeBSD support being added to GlusterFS From: Justin Clift In-Reply-To: Date: Tue, 24 Jun 2014 22:05:12 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> References: To: freebsd-fs@freebsd.org X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 24 Jun 2014 21:05:17 -0000 On 21/06/2014, at 4:19 PM, Justin Clift 
wrote: > The GlusterFS project is looking to add official support for FreeBSD to our next release. Thanks for everyone's help with this. It's made a positive difference. :) We're now officially adding FreeBSD support upstream for v3.6 (when that gets released down the track): http://supercolony.gluster.org/pipermail/gluster-devel/2014-June/041223.html Regards and best wishes, Justin Clift -- GlusterFS - http://www.gluster.org An open source, distributed file system scaling to several petabytes, and handling thousands of clients. My personal twitter: twitter.com/realjustinclift From owner-freebsd-fs@FreeBSD.ORG Thu Jun 26 20:33:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C3227EB4 for ; Thu, 26 Jun 2014 20:33:00 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A46B525DB for ; Thu, 26 Jun 2014 20:33:00 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.8/8.14.8) with ESMTP id s5QKX05L043735 for ; Thu, 26 Jun 2014 20:33:00 GMT (envelope-from bdrewery@freefall.freebsd.org) Received: (from bdrewery@localhost) by freefall.freebsd.org (8.14.9/8.14.9/Submit) id s5QKX0lL043734 for freebsd-fs@freebsd.org; Thu, 26 Jun 2014 20:33:00 GMT (envelope-from bdrewery) Received: (qmail 25936 invoked from network); 26 Jun 2014 15:32:58 -0500 Received: from unknown (HELO blah) (freebsd@shatow.net@67.182.131.225) by sweb.xzibition.com with ESMTPA; 26 Jun 2014 15:32:58 -0500 Message-ID: <53AC8377.4060408@FreeBSD.org> Date: Thu, 26 Jun 2014 15:32:55 -0500 From: Bryan Drewery Organization: FreeBSD User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: getdirentries cookies usage outside of UFS [PATCH] References: <201404142227.s3EMRwIL080960@chez.mckusick.com> In-Reply-To: <201404142227.s3EMRwIL080960@chez.mckusick.com> Content-Type: multipart/mixed; boundary="------------000609010903090106080801" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 26 Jun 2014 20:33:00 -0000 This is a multi-part message in MIME format. On 2014-04-14 17:27, Kirk McKusick wrote: >> Date: Fri, 11 Apr 2014 21:03:57 -0500 >> From: Bryan Drewery >> To: freebsd-fs@freebsd.org >> Subject: getdirentries cookies usage outside of UFS >> >> Recently I was working on a compat syscall for sys_getdirentries() >> that >> converts between our dirent and the FreeBSD dirent struct. We had >> never >> tried using this on TMPFS and when we did ran into weird issues (hence >> my recent commits to TMPFS to clarify some of the getdirentries() >> code).
>> We were not using cookies, so I referenced the Linux compat module >> (linux_file.c getdents_common()) >> >> I ran across this code: >> >>> /* >>> * When using cookies, the vfs has the option of reading from >>> * a different offset than that supplied (UFS truncates the >>> * offset to a block boundary to make sure that it never reads >>> * partway through a directory entry, even if the directory >>> * has been compacted). >>> */ >>> while (len > 0 && ncookies > 0 && *cookiep <= off) { >>> bdp = (struct dirent *) inp; >>> len -= bdp->d_reclen; >>> inp += bdp->d_reclen; >>> cookiep++; >>> ncookies--; >>> } >> >> >> At first it looked innocuous but then it occurred to me it was the >> root >> of the issue I was having as it was eating my cookies based on their >> value, despite tmpfs cookies being random hash values that have no >> sequential relation. So I looked at how NFS was handling the same code >> and found this lovely hack from r216691: >> >>> not_zfs = strcmp(vp->v_mount->mnt_vfc->vfc_name, "zfs"); >> ... >>> while (cpos < cend && ncookies > 0 && >>> (dp->d_fileno == 0 || dp->d_type == DT_WHT || >>> (not_zfs != 0 && ((u_quad_t)(*cookiep)) <= toff))) { >>> cpos += dp->d_reclen; >>> dp = (struct dirent *)cpos; >>> cookiep++; >>> ncookies--; >>> } >> >> I ended up doing the opposite, only running the code if getting >> dirents >> from "ufs". >> >> So there are multiple issues here. >> >> 1. NFS is broken on TMPFS. I can see why it's gone so long unnoticed, >> why would you do that? Still probably worth fixing. >> >> 2. Linux and SVR4 getdirentries() are both broken on TMPFS/ZFS. I am >> surprised Linux+ZFS has not been noticed by now. I am aware the SVR4 >> is >> full of other bugs too. I ran across many just reviewing the >> getdirentries code alone. >> >> Do any other file systems besides UFS do this offset/cookie >> truncation/rewind? If UFS is the only one it may be acceptable to >> change >> this zfs check to !ufs and add it to the other modules. If we don't >> like >> that, or there are potentially other file systems doing this too, how >> about adding a flag to somewhere to indicate the file system has >> monotonically increasing offsets and needs this rewind support. I'm >> not >> sure where that is best done, struct vfsconf? >> >> Regards, >> Bryan Drewery > > This code is specific to UFS. I concur with your fix of making > it conditional on UFS. I feel guilty for putting that code in > unconditionally in the first place. In my defense it was 1982 > and UFS was the only filesystem :-) > > Kirk McKusick Based on this discussion I have made the following patch against NFS. If we're comfortable with this approach I will apply the same logic to the Linux and SVR4 modules. Mirrored at http://people.freebsd.org/~bdrewery/patches/nfs-zfs-ufs.patch The patch changes 'not_zfs' to 'is_ufs' in the NFS code. Some of the code actually is ZFS-specific in regards to snapshot handling. So I have also added a 'is_zfs' variable to compare against for those cases. I've removed the comments about ZFS in the UFS-only cases as the existing comment seems to cover it fine. (Unrelated) This code, from r259845, in sys/fs/nfsserver/nfs_nfsdport.c seems odd to me: /* * Check to see if entries in this directory can be safely acquired * via VFS_VGET() or if a switch to VOP_LOOKUP() is required. * ZFS snapshot directories need VOP_LOOKUP(), so that any * automount of the snapshot directory that is required will * be done.
* This needs to be done here for NFSv4, since NFSv4 never does * a VFS_VGET() for "." or "..". */ - if (not_zfs == 0) { + if (is_zfs == 1) { r = VFS_VGET(mp, at.na_fileid, LK_SHARED, &nvp); if (r == EOPNOTSUPP) { usevget = 0; cn.cn_nameiop = LOOKUP; cn.cn_lkflags = LK_SHARED | LK_RETRY; cn.cn_cred = nd->nd_cred; cn.cn_thread = p; } else if (r == 0) vput(nvp); } This fallback is also done later (from r199715), but not limited to ZFS. Would it make sense to not limit this first check to ZFS as well? I see that unionfs_vget also returns EOPNOTSUPP. A nullfs mount from ZFS served over NFS may also return EOPNOTSUPP, as odd as that is. -- Regards, Bryan Drewery [Attachment "nfs-zfs-ufs.patch" (base64-encoded diff) omitted; the same patch is mirrored at the http://people.freebsd.org/~bdrewery/patches/nfs-zfs-ufs.patch URL given above.]
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 26 20:48:55 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D8C501FE; Thu, 26 Jun 2014 20:48:55 +0000 (UTC) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 7565D2705; Thu, 26 Jun 2014 20:48:54 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AhIFABmGrFODaFve/2dsb2JhbABaFoNJWoJupzYBAQEBAQEGkiyGbVMBgSR1hAMBAQEEAQEBIAQnIAQHGw4DAwECAQICDRkCKQEJHggGCAcEARwEiCENpRydPBeBK4Q5iEQBBgEBGzQHgneBTAWXcIQvki+DXiEvBnwBCBci X-IronPort-AV: E=Sophos;i="5.01,555,1400040000"; d="scan'208";a="135343866" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-annu.net.uoguelph.ca with ESMTP; 26 Jun 2014 16:48:47 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 6A40FB403F; Thu, 26 Jun 2014 16:48:47 -0400 (EDT) Date: Thu, 26 Jun 2014 16:48:47 -0400 (EDT) From: Rick Macklem To: Bryan Drewery Message-ID: <1142861946.4612955.1403815727424.JavaMail.root@uoguelph.ca> In-Reply-To: <53AC8377.4060408@FreeBSD.org> Subject: Re: getdirentries cookies usage outside of UFS [PATCH] MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.6_GA_2926 (ZimbraWebClient - FF3.0 (Win)/7.2.6_GA_2926) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: ,
List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 26 Jun 2014 20:48:55 -0000 Bryan Drewery wrote: > On 2014-04-14 17:27, Kirk McKusick wrote: > >> Date: Fri, 11 Apr 2014 21:03:57 -0500 > >> From: Bryan Drewery > >> To: freebsd-fs@freebsd.org > >> Subject: getdirentries cookies usage outside of UFS > >> > >> Recently I was working on a compat syscall for sys_getdirentries() > >> that > >> converts between our dirent and the FreeBSD dirent struct. We had > >> never > >> tried using this on TMPFS and when we did ran into weird issues > >> (hence > >> my recent commits to TMPFS to clarify some of the getdirentries() > >> code). > >> We were not using cookies, so I referenced the Linux compat module > >> (linux_file.c getdents_common()) > >> > >> I ran across this code: > >> > >>> /* > >>> * When using cookies, the vfs has the option of reading from > >>> * a different offset than that supplied (UFS truncates the > >>> * offset to a block boundary to make sure that it never > >>> reads > >>> * partway through a directory entry, even if the directory > >>> * has been compacted). > >>> */ > >>> while (len > 0 && ncookies > 0 && *cookiep <= off) { > >>> bdp = (struct dirent *) inp; > >>> len -= bdp->d_reclen; > >>> inp += bdp->d_reclen; > >>> cookiep++; > >>> ncookies--; > >>> }=20 > >> > >> > >> At first it looked innocuous but then it occurred to me it was the > >> root > >> of the issue I was having as it was eating my cookies based on > >> their > >> value, despite tmpfs cookies being random hash values that have no > >> sequential relation. So I looked at how NFS was handling the same > >> code > >> and found this lovely hack from r216691: > >> > >>> not_zfs = strcmp(vp->v_mount->mnt_vfc->vfc_name, "zfs"); > >> =2E.. > >>> while (cpos < cend && ncookies > 0 && > >>> (dp->d_fileno == 0 || dp->d_type == DT_WHT || > >>> (not_zfs != 0 && ((u_quad_t)(*cookiep)) <= toff))) { > >>> cpos += dp->d_reclen; > >>> dp = (struct dirent *)cpos; > >>> cookiep++; > >>> ncookies--; > >>> } > >> > >> I ended up doing the opposite, only running the code if getting > >> dirents > >> from "ufs". > >> > >> So there's multiple issue here. > >> > >> 1. NFS is broken on TMPFS. I can see why it's gone so long > >> unnoticed, > >> why would you do that? Still probably worth fixing. > >> > >> 2. Linux and SVR4 getdirentries() are both broken on TMPFS/ZFS. I > >> am > >> surprised Linux+ZFS has not been noticed by now. I am aware the > >> SVR4 > >> is > >> full of other bugs too. I ran across many just reviewing the > >> getdirentries code alone. > >> > >> Do any other file systems besides UFS do this offset/cookie > >> truncation/rewind? If UFS is the only one it may be acceptable to > >> change > >> this zfs check to !ufs and add it to the other modules. If we > >> don't > >> like > >> that, or there are potentially other file systems doing this too, > >> how > >> about adding a flag to somewhere to indicate the file system has > >> monotonically increasing offsets and needs this rewind support. > >> I'm > >> not > >> sure where that is best done, struct vfsconf? > >> > >> Regards, > >> Bryan Drewery > > > > This code is specific to UFS. I concur with your fix of making > > it conditionl on UFS. I feel guilty for putting that code in > > unconditionally in the first place. In my defense it was 1982 > > and UFS was the only filesystem :-) > > > > Kirk McKusick > > Based on this discussion I have made the following patch against NFS. 
> If > we're comfortable with this approach I will apply the same logic to > the > Linux and SVR4 modules. > > Mirrored at > http://people.freebsd.org/~bdrewery/patches/nfs-zfs-ufs.patch > > The patch changes 'not_zfs' to 'is_ufs' in the NFS code. Some of the > code actually is ZFS-specific in regards to snapshot handling. So I > have > also added a 'is_zfs' variable to compare against for those cases. > > I've removed the comments about ZFS in the UFS-only cases as the > existing comment seems to cover it fine. > > (Unrelated) This code, from r259845, in > sys/fs/nfsserver/nfs_nfsdport.c > seems odd to me: > > /* > * Check to see if entries in this directory can be safely > acquired > * via VFS_VGET() or if a switch to VOP_LOOKUP() is > required. > * ZFS snapshot directories need VOP_LOOKUP(), so that any > * automount of the snapshot directory that is required will > * be done. > * This needs to be done here for NFSv4, since NFSv4 never > does > * a VFS_VGET() for "." or "..". > */ > - if (not_zfs == 0) { > + if (is_zfs == 1) { > r = VFS_VGET(mp, at.na_fileid, LK_SHARED, &nvp); > if (r == EOPNOTSUPP) { > usevget = 0; > cn.cn_nameiop = LOOKUP; > cn.cn_lkflags = LK_SHARED | LK_RETRY; > cn.cn_cred = nd->nd_cred; > cn.cn_thread = p; > } else if (r == 0) > vput(nvp); > } > > This fallback is also done later (from r199715), but not limited to > ZFS. Would it make sense to not limit this first check to ZFS as > well? I > see that unionfs_vget also returns EOPNOTSUPP. A nullfs mount from > ZFS > served over NFS may also return EOPNOTSUPP, as odd as that is. > Well, the answer is similar to the not_zfs one. I don't have any way to test all the different file systems to make sure a change doesn't introduce a regression for any of them. In this case, someone was able to confirm that this changed fixed a problem for ZFS, so I put the code in as ZFS specific. The change further down was done by pjd@ and not me. Since he did it for all file systems in the old server, I duplicated that. (At that point, there was no ZFS specific stuff.) Enabling it for all file system types introduces a little overhead, but I would agree with that if it was known that it did not cause a regression for any file system type, then it would be nice to avoid a file system specific chunk. Bottom line. If you test this for all file systems and/or check the code for all file system types and are convinced it is safe to do for all file system types, then I have no problem with you doing a commit to change it. rick ps: I'll look at the patch, but I imagine it's fine with me. Basically, if Kirk thinks it's UFS specific, I believe him. 
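For concreteness, this is the shape of the change under discussion, as a sketch reconstructed from the code quoted earlier in the thread and from Bryan's description of the patch (not the literal committed diff):

	/*
	 * Only UFS guarantees monotonically increasing directory cookies
	 * (it truncates the requested offset to a directory-block boundary
	 * and may therefore re-read entries the caller has already seen);
	 * tmpfs and ZFS hand back hash-like cookies, so skipping cookies
	 * by value would discard perfectly valid entries.
	 */
	is_ufs = strcmp(vp->v_mount->mnt_vfc->vfc_name, "ufs") == 0;
	...
	while (cpos < cend && ncookies > 0 &&
	    (dp->d_fileno == 0 || dp->d_type == DT_WHT ||
	     (is_ufs == 1 && ((u_quad_t)(*cookiep)) <= toff))) {
		cpos += dp->d_reclen;
		dp = (struct dirent *)cpos;
		cookiep++;
		ncookies--;
	}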
> > -- > Regards, > Bryan Drewery > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Jun 27 00:06:25 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5B0AC28C; Fri, 27 Jun 2014 00:06:25 +0000 (UTC) Received: from mail-ve0-x234.google.com (mail-ve0-x234.google.com [IPv6:2607:f8b0:400c:c01::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E6A702788; Fri, 27 Jun 2014 00:06:24 +0000 (UTC) Received: by mail-ve0-f180.google.com with SMTP id jw12so4559214veb.11 for ; Thu, 26 Jun 2014 17:06:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=gjX70HQmike0Uwf48vLilPNrbQPCd+rwWKbDLMdvWAg=; b=o/Yu9bLfqgXjwxhtwanZoWFkdIqj62rlW11WPgJ9vCMlyS+/0hMcFmw55O1nz5Xsy/ p4l1Kgvmuf0YfzsYZeV/tuepXUwK0+n5ByrX6nfKcRln2hfrF25cc/jr4PSZd1blgaBa B6PgWitZNGfl1xeHcxygHtJewHZv7Lk6HduK9UTmET6O83Nw81QAdsTOE+Ql4QPZGPys /BUVXDvQfu9ku/OWJxDEWY1IL2OXrH7/E+OzeOZk91u2Nk9pSZGyJE5e33DCSkKid+ln pwS5JI5DpI2REYXsstHjeAJ1JFmuWwv4Q8OssC37CA/Xz8yN9JxuQ0mqcy4T1aqo7kla FW1g== MIME-Version: 1.0 X-Received: by 10.52.29.236 with SMTP id n12mr13934395vdh.38.1403827583825; Thu, 26 Jun 2014 17:06:23 -0700 (PDT) Received: by 10.221.65.198 with HTTP; Thu, 26 Jun 2014 17:06:23 -0700 (PDT) In-Reply-To: References: <53A25FC7.5040105@connotech.com> <53A2E91B.8060802@av8n.com> <20140619134829.5d7bd14a@jabberwock.cb.piermont.com> <1403207567.1908.23.camel@excessive.dsl.static.sonic.net> <20140619170644.34e6ddf0@jabberwock.cb.piermont.com> <20140620090421.0cb08f1a@jabberwock.cb.piermont.com> Date: Thu, 26 Jun 2014 20:06:23 -0400 Message-ID: Subject: Fwd: [Cryptography] hardware vs software FDE (was Re: Shredding a file on a flash-based file system?) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Cc: freebsd-security@freebsd.org, freebsd-geom@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 27 Jun 2014 00:06:25 -0000 Links regarding fde implementations, relevant re geli / gbde. ---------- Forwarded message ---------- From: Darren Lasko Date: Thu, Jun 26, 2014 at 6:03 PM Subject: Re: [Cryptography] hardware vs software FDE (was Re: Shredding a file on a flash-based file system?) To: "Perry E. Metzger" Cc: cryptography@metzdowd.com Hi Perry, Sorry for the very slow reply; I was away on vacation. On Fri, Jun 20, 2014 at 9:04 AM, Perry E. Metzger wrote: > > On Thu, 19 Jun 2014 23:37:04 -0400 Darren Lasko > wrote: > > On Thu, Jun 19, 2014 at 5:06 PM, Perry E. Metzger > > wrote: > > > > > It is different in a vital respect -- in the software > > > implementation, you can more or less check that everything is > > > working as expected, and you don't have to trust that the drive > > > isn't sabotaging you. That's quite different -- vitally so, I > > > think. > [...] 
> > However, to your point that "in the software implementation, you > > can more or less check that everything is working as expected," > > this only holds true if it's open-source (and as we have found > > recently, this is still no guarantee against nasty security > > "flaws"), or if you're willing to reverse-engineer a closed-source > > product (which you could also do with a hardware-based product, > > though likely at a greater expense). > > No. You are missing a very vital point. > I really don't think I missed your point. I even acknowledged that point in my previous post. My counter-point is merely that the actual media encryption part, while vitally important, is only a small part of the overall FDE solution. The other parts of the solution are equally important, much harder to get right, and not readily verifiable in *either* a hardware solution or a closed-source software solution. I would argue that if you don't trust hard drives with built-in encryption, then you also shouldn't trust closed-source software drive encryption products (and maybe you don't). In fact, even the actual media encryption part is probably much harder to verify in a closed-source software implementation than you might be thinking... > If the sectors on the drive are encrypted with some particular > algorithm using some particular key, I can check, in a software only > solution, that the sectors are indeed encrypted in that key using > that algorithm. Getting "that key" out of a closed-source software FDE product will require reverse-engineering the product or employing something like the techniques used in the Princeton "cold boot attack". And once you have the key, you also need to know the encryption algorithm and cipher mode being used (which is usually specified in the product documentation) *plus* the product's algorithm for generating IVs/tweaks for the cipher mode (probably only discoverable by reverse-engineering, since I've never seen a closed-source implementation give this level of detail in its documentation). This is why I said in my previous post, "you can take a look at the ciphertext and verify that you see random-looking bits, and maybe verify through experimentation that it's not using a poor choice of cipher mode like ECB." Anything more than that will require you to dive deep into the inner workings of the product. [...] > > It is actually much worse than that since the hardware implementation > could be doing things like stashing keys in hidden sectors, but one > need not go so far as to worry about that because even the most basic > audit is impossible. > Software-only products are capable of implementing equivalent levels of malfeasance, for example by obfuscating the plaintext media encryption key and stashing it in the area of the drive they reserve for their pre-boot code and metadata. They could even encrypt the media key using a public key to which the developers (or their "partners") hold the private key. > > > While it's true that even with a closed-source product you can take > > a look at the ciphertext and verify that you see random-looking > > bits, > > No, if they say "this is using AES-256 GCM" I can do more than that. > Again, not without the key. > > If your closed source vendor is not telling you what algorithm and > mode they are using, they are of course also doing something > unacceptable and should be excluded from your purchases. 
> It is acceptable (though not even remotely optimal) if the encryption
> implementation is closed source, but it is utterly unacceptable if
> its method of operation is not fully disclosed.
>
Your original comment was about "checking/verifying", not "disclosure".
If you look at the datasheets for self-encrypting drives from just about
any respectable manufacturer, they disclose the encryption
algorithm/mode:

http://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ssd-pro-1500-series-sata-specification.pdf
(XTS-AES-256, FIPS 197 certified)

http://www.micron.com/-/media/documents/products/data%20sheet/ssd/m550_2_5_ssd.pdf
(AES-256 CBC)

Seagate has FIPS 140-2 certification on various models, so even more
information can be gleaned from their public security policies (e.g.
http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp2119.pdf)
and CAVP certifications (e.g. AES cert #1974 for XTS, CBC, and ECB).

Just to reiterate, checking that the actual media encryption is
implemented correctly in a closed-source software product is not a
straightforward task (even though you can easily "see" the ciphertext).
We haven't even discussed how you would verify the other (and trickier,
IMO) bits of the product, such as the entropy source & RNG for
generating media keys, how passwords are "strengthened", how the media
key(s) are cryptographically protected with the "strengthened"
authentication credentials, how the "key blobs" are sanitized from the
drive (especially on flash-based storage devices), etc.

I think it's fair to say that hardware-based FDE solutions aren't any
more "untrustworthy" than their closed-source software counterparts, and
I think one could even argue that being open source isn't a silver
bullet (http://underhanded.xcott.com/). Even in software
implementations, there are a variety of components for which it is just
as difficult to verify that everything is working as expected, and so a
high level of faith is still required that the software isn't
sabotaging you.
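That narrow "random-looking bits" check is easy to make concrete. A
minimal sketch on FreeBSD, assuming the encrypted provider shows up as
/dev/da0 (a stand-in device name) and that the optional ent(1) entropy
tool from ports is available:

    # Dump a few raw sectors and eyeball them; ciphertext should show no
    # obvious structure or repeated 16-byte blocks:
    dd if=/dev/da0 bs=512 count=4 skip=100000 2>/dev/null | od -A x -t x1 | head -n 24

    # Estimate entropy over a larger sample; values very close to
    # 8 bits/byte are consistent with random-looking ciphertext:
    dd if=/dev/da0 bs=64k count=256 skip=1000 2>/dev/null | ent

Note that this only demonstrates that the bits look random; as argued
above, it says nothing about the key handling around them.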
Regards, Darren _______________________________________________ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography From owner-freebsd-fs@FreeBSD.ORG Fri Jun 27 07:04:16 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6DAC9526 for ; Fri, 27 Jun 2014 07:04:16 +0000 (UTC) Received: from mail-wi0-x236.google.com (mail-wi0-x236.google.com [IPv6:2a00:1450:400c:c05::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 010212A18 for ; Fri, 27 Jun 2014 07:04:15 +0000 (UTC) Received: by mail-wi0-f182.google.com with SMTP id bs8so2254167wib.15 for ; Fri, 27 Jun 2014 00:04:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=//a/v53/nZDP6lpFFvjemab+TBWefL0PnX0x4XEf0iU=; b=rl6uXm+xmzxZUe7gyHFYZ44AnMJP/rvkxhIN4+q0YLXB0ByALcJvvcgXFK3q613yIu ZRiJUwJXuAOOwxpuAUvzbQVOnimskwEXxyKk7ZTHvQoV8ZQLAcAvYCrg+ZEc73aWCp4s yVepfUo2NrUPa44E269wSFp1M/8QJ1hiAenbQOLbeGbPoO5VD/QXK0Rg2TUqHkI49tYP ySj+A16ml4bI2Jyi+u5d8KagZ22mRFK5cSVlrSvxq5rBwS29ErN5x/sgO+9e177J7rTb qEbq0Zw7eI9ZXKIFaLkfxaX6cVdcqlCN66GBjVE+iQtaOByMeLmGytX60X+G2JidhVZi CDSQ== X-Received: by 10.180.20.112 with SMTP id m16mr9898364wie.6.1403852654088; Fri, 27 Jun 2014 00:04:14 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id o3sm74506206wiz.24.2014.06.27.00.04.12 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 27 Jun 2014 00:04:13 -0700 (PDT) Sender: Baptiste Daroussin Date: Fri, 27 Jun 2014 09:04:11 +0200 From: Baptiste Daroussin To: Justin Clift Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140627070411.GI24440@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="zqjkMoGlbUJ91oFe" Content-Disposition: inline In-Reply-To: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 27 Jun 2014 07:04:16 -0000 --zqjkMoGlbUJ91oFe Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, Jun 24, 2014 at 10:05:12PM +0100, Justin Clift wrote: > On 21/06/2014, at 4:19 PM, Justin Clift wrote: > > > The GlusterFS project is looking to add official support for FreeBSD to= our next release. >=20 >=20 > Thanks for everyone's help with this. It's made a positive > difference. 
:) >=20 > We're now officially adding FreeBSD support upstream for > v3.6 (when that gets released down the track): >=20 > http://supercolony.gluster.org/pipermail/gluster-devel/2014-June/041223= =2Ehtml >=20 > Regards and best wishes, >=20 > Justin Clift For you information here is my version: http://people.freebsd.org/~bapt/glusterfs.diff It is just missing the license bits if everyone here agrees I'll commit :) regards, Bapt --zqjkMoGlbUJ91oFe Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlOtF2sACgkQ8kTtMUmk6Eyu4wCeIvwnfK96cRr7vnOVb9m17sQj B7YAn3oeBcmJvJaE0PCCQIgGH+Qo8m6n =jnV5 -----END PGP SIGNATURE----- --zqjkMoGlbUJ91oFe-- From owner-freebsd-fs@FreeBSD.ORG Fri Jun 27 21:34:47 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BB74F3E3; Fri, 27 Jun 2014 21:34:47 +0000 (UTC) Received: from mail.crittercasa.com (mail.turbofuzz.com [208.87.221.144]) by mx1.freebsd.org (Postfix) with ESMTP id 52A4A2F7F; Fri, 27 Jun 2014 21:34:47 +0000 (UTC) Received: from kruse-25.3.ixsystems.com (unknown [69.198.165.132]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.crittercasa.com (Postfix) with ESMTPS id BB4F316485F; Fri, 27 Jun 2014 14:27:36 -0700 (PDT) Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Subject: Re: FreeBSD support being added to GlusterFS From: Jordan Hubbard In-Reply-To: <20140627070411.GI24440@ivaldir.etoilebsd.net> Date: Fri, 27 Jun 2014 14:27:35 -0700 Message-Id: References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> To: Baptiste Daroussin X-Mailer: Apple Mail (2.1878.6) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 27 Jun 2014 21:34:47 -0000 On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin = wrote: > For you information here is my version: > http://people.freebsd.org/~bapt/glusterfs.diff >=20 > It is just missing the license bits >=20 > if everyone here agrees I'll commit :) Seems reasonable. Question from my own "questions to be asked about = glusterfs" pile: Paths. I notice that glusterd requires quite a few = path not in the standard hierarchy for /usr/local (or any value of = ${prefix}) that will cause it to simply fall over upon first invocation. = To wit: /var/lib/glusterd (nothing in FreeBSD uses /var/lib at all - /var/db, = /var/run and /var/tmp are more canonical locations, depending on what = you [the service] are trying to do). In fact, ${prefix}/var seems to be generally avoided by most things in = ports. /usr/local/var/log is highly atypical, for example. This also creates problems for us in FreeNAS since our root filesystem = is read-only by default, and we simply make parts of /var (the root = /var) r/w to accommodate things wanting to write into /var/log, = /var/tmp/, /var/run and so on. 
I would hope that the port could also be = configured to run as a system component, or at least obey a more = predictable ${prefix} hierarchy so that we could map things suitably r/w = into the location(s) that glusterfs needs to scribble on at runtime. I was going to write all of this up in a more exhaustive email but I got = side-tracked by other projects. :) - Jordan From owner-freebsd-fs@FreeBSD.ORG Fri Jun 27 21:59:29 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C0D0590E for ; Fri, 27 Jun 2014 21:59:29 +0000 (UTC) Received: from mail-wi0-x232.google.com (mail-wi0-x232.google.com [IPv6:2a00:1450:400c:c05::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 538822176 for ; Fri, 27 Jun 2014 21:59:29 +0000 (UTC) Received: by mail-wi0-f178.google.com with SMTP id n15so3444074wiw.17 for ; Fri, 27 Jun 2014 14:59:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=bLpuX6yZY2oS1TGlF0Wp8Gl3iMh/Vktt9XLjBdvGd/M=; b=UmF+94dKMRkUIERm1b6Gf4DgTjp6wyVTqdEbPLD/UAlKyi1hagmsL6Zl/qdR8iCCQs RXWexFlFsYkPAJFpjTKXJ6UGrnR0oij7EohzeXuvrseeSVC5Ev3g5f9oo0i7gO0HpR9Q L8JyM6oXfSSCbMIPmP083kKjKGzw3zyjmOaqpSN+p8/08Kdrd9hP476vtYumZo8Yl3tq a17E/S1FNtCU0KqJxVBMsgz/L8AXySKG4TBD8Jba1jDINrabEy1rtdtwvlsXTgjZJG9S baAmIqpsWKGPmLArhecUcwGkP9A67Xea6a4pSmwf2TByTJZEuF4PBLaXMmnFhLmsW9JT C16g== X-Received: by 10.180.14.40 with SMTP id m8mr14600530wic.50.1403906367399; Fri, 27 Jun 2014 14:59:27 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id ge17sm991987wic.0.2014.06.27.14.59.26 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 27 Jun 2014 14:59:26 -0700 (PDT) Sender: Baptiste Daroussin Date: Fri, 27 Jun 2014 23:59:24 +0200 From: Baptiste Daroussin To: Jordan Hubbard Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140627215924.GC34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="4jXrM3lyYWu4nBt5" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 27 Jun 2014 21:59:29 -0000 --4jXrM3lyYWu4nBt5 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Jun 27, 2014 at 02:27:35PM -0700, Jordan Hubbard wrote: >=20 > On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin wrote: >=20 > > For you information here is my version: > > http://people.freebsd.org/~bapt/glusterfs.diff > >=20 > > It is just missing the license bits > >=20 > > if everyone here agrees I'll commit :) >=20 > Seems reasonable. Question from my own "questions to be asked about glus= terfs" pile: Paths. 
I notice that glusterd requires quite a few path not in the standard hierarchy for /usr/local (or any value of ${prefix}) that will cause it to simply fall over upon first invocation. To wit:
>
> /var/lib/glusterd (nothing in FreeBSD uses /var/lib at all - /var/db, /var/run and /var/tmp are more canonical locations, depending on what you [the service] are trying to do).
>
> In fact, ${prefix}/var seems to be generally avoided by most things in ports. /usr/local/var/log is highly atypical, for example.
>
> This also creates problems for us in FreeNAS since our root filesystem is read-only by default, and we simply make parts of /var (the root /var) r/w to accommodate things wanting to write into /var/log, /var/tmp/, /var/run and so on. I would hope that the port could also be configured to run as a system component, or at least obey a more predictable ${prefix} hierarchy so that we could map things suitably r/w into the location(s) that glusterfs needs to scribble on at runtime.
>
> I was going to write all of this up in a more exhaustive email but I got side-tracked by other projects. :)
>
> - Jordan
>

Yup, I figured that out and was about to change /var/lib into /var/db. Generally in ports we are trying to enforce everything under ${PREFIX} except /var; that predates me and I haven't made any move in the direction of changing it. But I do try to enforce "do not touch base except /var" for any port, including "do not touch /etc"; right now only shells and user additions touch /etc.

regards,
Bapt

--4jXrM3lyYWu4nBt5 Content-Type: application/pgp-signature
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlOt6TwACgkQ8kTtMUmk6ExWQQCgnJKk0+J31Ct7S7GfjCb4/H7Z
dHcAnimSlI0llr3RgqACIz2pwP1Bp6pb
=N1xg
-----END PGP SIGNATURE-----
--4jXrM3lyYWu4nBt5--

From owner-freebsd-fs@FreeBSD.ORG Fri Jun 27 22:35:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 439C9DF5 for ; Fri, 27 Jun 2014 22:35:03 +0000 (UTC) Received: from mail-we0-x232.google.com (mail-we0-x232.google.com [IPv6:2a00:1450:400c:c03::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CA08A24BC for ; Fri, 27 Jun 2014 22:35:02 +0000 (UTC) Received: by mail-we0-f178.google.com with SMTP id x48so5722825wes.9 for ; Fri, 27 Jun 2014 15:34:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=x3VuX/2HmcIpzNV2FhkAS/KN4y2dI4fkT9gPj81ufQg=; b=UwpggeJKEbMOrJrVqVBqnpylcKRRaAfkQqd081xSEKWAj17ex0xksh9yCQdV0J69v+ fxxOzKrb1UXW0Uk5F+DdfpLsDKe+Nyvkp0iS8rUltBenoo/PI7jlzSUy6jBjGTdKUCo0 qKa8pC0CsmEWcznPkr8J3U5tgOrfQK8zJhGpJe2bz6WKtarPvU7AV1PJw57jUdyLxAd1 CLplWIDmizIzP/peRQtKlYBpo/DZIULT/efIBvLXELK4JEuffcl5+PshtEnw/I9eLfgD A7g9fMftX6qI0ueEnR+Qs1xHzwhsVfJ00wMPWlq0XxPHb0MxQ9DdvjbqU16C+WYbFE5i 4iPg== X-Received: by 10.194.237.135 with SMTP id vc7mr29399183wjc.86.1403908499690; Fri, 27 Jun 2014 15:34:59 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id d3sm1233618wiy.13.2014.06.27.15.34.58 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 27 Jun 2014 15:34:58 -0700 (PDT) Sender: Baptiste Daroussin Date: Sat, 28 Jun 2014 00:34:56 +0200 From: Baptiste Daroussin To: Jordan Hubbard Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140627223456.GD34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="3Gf/FFewwPeBMqCJ" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 27 Jun 2014 22:35:03 -0000 --3Gf/FFewwPeBMqCJ Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Fri, Jun 27, 2014 at 02:27:35PM -0700, Jordan Hubbard wrote:
>
> On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin wrote:
>
> > For you information here is my version:
> > http://people.freebsd.org/~bapt/glusterfs.diff
> >
> > It is just missing the license bits
> >
> > if everyone here agrees I'll commit :)
>
> Seems reasonable. Question from my own "questions to be asked about glusterfs" pile: Paths. I notice that glusterd requires quite a few path not in the standard hierarchy for /usr/local (or any value of ${prefix}) that will cause it to simply fall over upon first invocation. To wit:
>
> /var/lib/glusterd (nothing in FreeBSD uses /var/lib at all - /var/db, /var/run and /var/tmp are more canonical locations, depending on what you [the service] are trying to do).
>
> In fact, ${prefix}/var seems to be generally avoided by most things in ports. /usr/local/var/log is highly atypical, for example.
>
> This also creates problems for us in FreeNAS since our root filesystem is read-only by default, and we simply make parts of /var (the root /var) r/w to accommodate things wanting to write into /var/log, /var/tmp/, /var/run and so on. I would hope that the port could also be configured to run as a system component, or at least obey a more predictable ${prefix} hierarchy so that we could map things suitably r/w into the location(s) that glusterfs needs to scribble on at runtime.
>
> I was going to write all of this up in a more exhaustive email but I got side-tracked by other projects. :)
>
> - Jordan
>

Here is a new version which uses /var/db, /var/log and /var/run. This version also works with pkg_install and handles the /var/db/glusterd/groups/virt file as a config file, so it will not overwrite a user's copy on reinstallation.

I haven't added the license as I do not understand the licensing: from the COPYING files I see GPLv2 and LGPLv3, but from the website I see GPLv3. Can someone enlighten me?

I'm still looking for a better maintainer than me :) I mean, I can make sure the port/package is clean, and I can also help on the fuse part if needed, but I have no use case so far for glusterfs besides highly supporting it on FreeBSD and being excited by the features it provides :)

Maybe someone at FreeNAS is willing to take maintainership of this port in the ports tree?

Jordan, does this new port fit FreeNAS requirements?

regards,
Bapt

--3Gf/FFewwPeBMqCJ Content-Type: application/pgp-signature
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlOt8ZAACgkQ8kTtMUmk6EwrHgCfZ4Ytw1GDQ1ZpjT1lIfEYJ+P+
2VcAoKl8muHSjMegyqU2U7Tx6ehQEMi/
=w/O7
-----END PGP SIGNATURE-----
--3Gf/FFewwPeBMqCJ--

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 03:03:37 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F3F5A18B for ; Sat, 28 Jun 2014 03:03:36 +0000 (UTC) Received: from smtp-vbr15.xs4all.nl (smtp-vbr15.xs4all.nl [194.109.24.35]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7FC8F2941 for ; Sat, 28 Jun 2014 03:03:36 +0000 (UTC) Received: from [192.168.178.21] (hamlet.badexample.net [83.163.216.124]) (authenticated bits=0) by smtp-vbr15.xs4all.nl (8.13.8/8.13.8) with ESMTP id s5S33Qlc053985 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO) for ; Sat, 28 Jun 2014 05:03:27 +0200 (CEST) (envelope-from walter@cloudvps.com) From: Walter Heukels Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable Subject: ZFS deadlock report Message-Id: Date: Sat, 28 Jun 2014 05:03:25 +0200 To: freebsd-fs@FreeBSD.org Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) X-Mailer: Apple Mail (2.1878.2) X-Virus-Scanned: by XS4ALL Virus Scanner X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 03:03:37 -0000

Hello, we are seeing occasional zfs hangs on our file servers. The symptoms are these:

- no NFS response
- 'zpool status' works and shows no errors, but 'zfs list' hangs
- ls on the mounted filesystems hangs as well
- kernel version is FreeBSD 10.0-STABLE #0 r265916, GENERIC

According to https://wiki.freebsd.org/AvgZfsDeadlockDebug this looks like a possible deadlock.
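For reference, a minimal sketch of how the two attachments below are
typically gathered (per the wiki page just cited), assuming the machine
still responds outside the hung pool:

    # Process list (including wait channels) and kernel stacks of all
    # threads; these are the artifacts the deadlock page asks for:
    ps auxwwl > /tmp/ps-aux.txt
    procstat -kk -a > /tmp/procstat.txt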
Running processes: https://www.badexample.net/tmp/ps-aux.txt procstat -kk -a output: https://www.badexample.net/tmp/procstat.txt Kind regards, Walter Heukels CloudVPS From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 03:36:34 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 87E6E8A4; Sat, 28 Jun 2014 03:36:34 +0000 (UTC) Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mx1.redhat.com", Issuer "Red Hat IS CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 5FFC12BD2; Sat, 28 Jun 2014 03:36:34 +0000 (UTC) Received: from int-mx14.intmail.prod.int.phx2.redhat.com (int-mx14.intmail.prod.int.phx2.redhat.com [10.5.11.27]) by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s5S3aR38013297 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 27 Jun 2014 23:36:27 -0400 Received: from [10.36.4.87] (vpn1-4-87.ams2.redhat.com [10.36.4.87]) by int-mx14.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP id s5S3aJCg032146 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Fri, 27 Jun 2014 23:36:24 -0400 Subject: Re: FreeBSD support being added to GlusterFS Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=us-ascii From: Justin Clift In-Reply-To: <20140627223456.GD34108@ivaldir.etoilebsd.net> Date: Sat, 28 Jun 2014 04:36:09 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <8BCE5893-C28B-4035-9595-D31DD19E6596@gluster.org> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> To: Baptiste Daroussin X-Scanned-By: MIMEDefang 2.68 on 10.5.11.27 Cc: Jordan Hubbard , freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 03:36:34 -0000 On 27/06/2014, at 11:34 PM, Baptiste Daroussin wrote: > I haven't added the license as I do not understand the license from = the COPYING > files I see GPLv2 and LGPLv3 but from the website I see GPLv3, can = someone > enlighten me? Oops. Looks like an out of date page on the website. GlusterFS is dual licensed GPLv2 and LGPLv3. I'll get the website fixed in a bit. Regards and best wishes, Justin Clift -- GlusterFS - http://www.gluster.org An open source, distributed file system scaling to several petabytes, and handling thousands of clients. 
My personal twitter: twitter.com/realjustinclift From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 03:38:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 85A38927 for ; Sat, 28 Jun 2014 03:38:41 +0000 (UTC) Received: from nm6-vm4.bullet.mail.ne1.yahoo.com (nm6-vm4.bullet.mail.ne1.yahoo.com [98.138.91.166]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4A4DD2BDE for ; Sat, 28 Jun 2014 03:38:40 +0000 (UTC) Received: from [98.138.226.180] by nm6.bullet.mail.ne1.yahoo.com with NNFMP; 28 Jun 2014 03:35:49 -0000 Received: from [98.138.101.181] by tm15.bullet.mail.ne1.yahoo.com with NNFMP; 28 Jun 2014 03:35:49 -0000 Received: from [127.0.0.1] by omp1092.mail.ne1.yahoo.com with NNFMP; 28 Jun 2014 03:35:49 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 784829.85719.bm@omp1092.mail.ne1.yahoo.com Received: (qmail 74543 invoked by uid 60001); 28 Jun 2014 03:35:49 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1403926549; bh=vOjBwvIA8wqzzgbk+E+bhqs6wpBioKaRvShwGiAg1gA=; h=Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=UiJndNwInrTQ8bDZwZ7Kir8C4voFP+AkZVCW+mnMnQZw7vJPw33z8ZRSAlcq6ojZnOwgEIxQeKW4gxIhojQijdzNGh3DDf0GPGXH8vdN5nwymbOp5WYcKzp8tKlmK4JZPEAPRy6xtmFvkT0KzyWCsSojLIWuhjxSv6zC+AccPsU= X-YMail-OSG: 3Mrd6zcVM1mXmTtntWN0kHieiaippqV11GpzngWAqbuaAI4 D9xT.m0Ps_RVwlgxXQYrBruz_KL50lVOvqBi8MxIsFfZ9kyohfk_EwpelcVo _GOCP1zC4eyGGnoFsFODiTq.B44BSk0SqU8wAXvv__boyllJI835DfqNI5xp 2qPid1j8TSCrND1IvBEglwdLpFEsH4hKbenrdwV_ZaYlXjjLYeOtS80a.1B1 XQQLeVxykRXsa4OgwqlbaM0RDDrCfx40Ntxd_MqWXhpm1IpX5l0Xqi086Ih0 _TjkEiH_vVc8TTjsjkmJvN6SWZKR4kEGAJr90Y23zptI2vt.p6HYu9_ljMyN J_QtyP725eQX_ol5uvc46_Goh_VNsCeyCRG5xl3YCXaG3gxnmGtmETrFychd n4esURoVA_ujk7AGekDjgOiJUXMA86zAh.Z2lSNeddfKjHIYITLTrudkl9lQ 5HCCWOhVacpuvG0MFTL3uMN.1.6HBmv7jYv3.84iiIvemHzzJyD6rWRAGmho Jqhw4ztrLEyYm0Q1z5tPHV2cTjXcLtJJa9kq1boxgzMLMyOcTHKJr.BtI3ja jQz0oBxImYnafjZVw5gXI9OICa8d4KXwBZL.fNG7KMKnBXGR8Xd5hStYHLJ. 
iC.ZyzWQEZrKn87i5W1oMjEm0PpEWCyIuWlsSs9np2Rl8jao5vMI- Received: from [207.154.100.163] by web120905.mail.ne1.yahoo.com via HTTP; Fri, 27 Jun 2014 20:35:49 PDT X-Mailer: YahooMailWebService/0.8.191.1 Message-ID: <1403926549.37922.YahooMailNeo@web120905.mail.ne1.yahoo.com> Date: Fri, 27 Jun 2014 20:35:49 -0700 From: Duckbreath Reply-To: Duckbreath Subject: Mounting a file system with superblock 32 To: "freebsd-fs@freebsd.org" MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 03:38:41 -0000

Hello all, I have a hard drive that represents an older installation of FreeBSD and I would like to access it. Using a USB -> IDE connection device the drive appears as:

/dev/da0[x*] where x* is various letters 'a', 'e', 'f', which no doubt represent the partitions from the previous installation.

A simple mount doesn't work though, returning an error message about an unrecognized device.

A simple usage of fsck_ffs however shows the file system clean: fsck_ffs -b 32 /dev/da0a returns clean, and newfs -N will give me various facts about the drive (block size, fragment size, cylinder groups, blocks, inodes, and sectors).

Googling around has shown that perhaps the mdmfs utility is what I need. Maybe. It appears to be in vogue as a general-purpose utility with an 'everything for everybody' type design. I couldn't find anything in its manual about specifying a superblock location though, like 32. Also, the manual and the handbook (http://www.freebsd.org/doc/en/books/handbook/disks-virtual.html) have a discrepancy: the manual claims the '-s' option only makes sense if -F is not specified, even though the example in the handbook specifies both.

I believe UFS drives with the older superblock at 32 are called 'UFS1' (as opposed to 'UFS2', of course, which is for larger drives). The mount utility's "-t" option can't seem to specify either, with only ufs being an available choice.

This fits my definition of non-trivial. Any of you know how to mount a UFS1 drive?
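A minimal sketch of the usual first steps, assuming /dev/da0a really is
a plain UFS1 partition: mount(8) detects UFS1 vs. UFS2 automatically
under the single "ufs" type, so an explicit read-only mount is the
place to start, and fsck_ffs(8) can restore a damaged primary superblock
from the UFS1 alternate at block 32:

    # Read-only mount; the ufs type covers both UFS1 and UFS2:
    mount -r -t ufs /dev/da0a /mnt

    # If mount still fails, have fsck_ffs repair the primary superblock
    # from the backup at block 32, then retry the mount:
    fsck_ffs -b 32 /dev/da0a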
From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 06:29:27 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 396984A6; Sat, 28 Jun 2014 06:29:27 +0000 (UTC) Received: from nk11p03mm-asmtp001.mac.com (nk11p03mm-asmtp001.mac.com [17.158.232.236]) (using TLSv1 with cipher DES-CBC3-SHA (168/168 bits)) (Client CN "smtp.me.com", Issuer "VeriSign Class 3 Extended Validation SSL SGC CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EEDA4290E; Sat, 28 Jun 2014 06:29:26 +0000 (UTC) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from [10.20.30.117] (11.sub-70-197-10.myvzw.com [70.197.10.11]) by nk11p03mm-asmtp001.mac.com (Oracle Communications Messaging Server 7u4-27.10(7.0.4.27.9) 64bit (built Jun 6 2014)) with ESMTPSA id <0N7V00GNI8P0C430@nk11p03mm-asmtp001.mac.com>; Sat, 28 Jun 2014 06:29:26 +0000 (GMT) X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.12.52,1.0.14,0.0.0000 definitions=2014-06-28_01:2014-06-27,2014-06-28,1970-01-01 signatures=0 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=7.0.1-1402240000 definitions=main-1406280086 Subject: Re: FreeBSD support being added to GlusterFS From: Jordan Hubbard In-reply-to: <20140627223456.GD34108@ivaldir.etoilebsd.net> Date: Fri, 27 Jun 2014 23:29:22 -0700 Message-id: References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> To: Baptiste Daroussin X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 06:29:27 -0000 On Jun 27, 2014, at 3:34 PM, Baptiste Daroussin wrote: > Here is a new version which uses /var/db, /var/log and /var/run > This version also works with pkg_install and handle the > /var/db/glusterd/groups/virt file as a config file so it will not overwrite user > one on reinstallation. > > Maybe someone at FreeNAS it willing to take maintainership of this port in the > ports tree? > > Jordan does this new port fits FreeNAS requirements? Something seems to have eaten the attachment - did you simply update the diff at http://people.freebsd.org/~bapt/glusterfs.diff ? If so, I can re-grab that and test it in FreeNAS easily tomorrow. I think we could also find someone in the FreeNAS project to take over the port, yes, certainly. Thanks! 
- Jordan From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 07:04:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E617FB24 for ; Sat, 28 Jun 2014 07:04:38 +0000 (UTC) Received: from mail-we0-x234.google.com (mail-we0-x234.google.com [IPv6:2a00:1450:400c:c03::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 7816C2B9F for ; Sat, 28 Jun 2014 07:04:38 +0000 (UTC) Received: by mail-we0-f180.google.com with SMTP id x48so6087377wes.11 for ; Sat, 28 Jun 2014 00:04:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=lOLnWVvo6Xfr+cofT2g4LszhCgSpUZRoM4ALEgjqiOQ=; b=QIYYsVJ1UxlAnhoJ8zrdL6BT+LFB4F4TnhW0JrWDTlQQQh2Bgf+blWvTr5VIeMRCtk BUWnxXXesZuOcCMR6JkjBA6CNdxuMeLSyF7qZokXVZgUJDkTVPcoBovrgSuEjkTd1zN/ GxV/enTLUmiQTgDD+L8eCauaRaOvCwdXcaI0PWvxzLRvXqJhLkPF1uuVTEFa8hyCe14X qUNODY/56nojV8JyBByS9oW41Mer2LTB6xMQWEhf2tITD4SkoroaUq64wVC3UqAVU+86 MtOEFjLjWq/zXqGPpiMBfUn1bVJbpNZbWpWP6mAvAJi/Df/g3A0SNsYWh7Ib0vPI1kv6 KsRA== X-Received: by 10.180.83.200 with SMTP id s8mr16266428wiy.2.1403939076640; Sat, 28 Jun 2014 00:04:36 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id pq9sm26220752wjc.35.2014.06.28.00.04.35 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 28 Jun 2014 00:04:35 -0700 (PDT) Sender: Baptiste Daroussin Date: Sat, 28 Jun 2014 09:04:33 +0200 From: Baptiste Daroussin To: Jordan Hubbard Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140628070433.GE34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="GxcwvYAGnODwn7V8" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 07:04:39 -0000 --GxcwvYAGnODwn7V8 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Jun 27, 2014 at 11:29:22PM -0700, Jordan Hubbard wrote: >=20 > On Jun 27, 2014, at 3:34 PM, Baptiste Daroussin wrote: >=20 > > Here is a new version which uses /var/db, /var/log and /var/run > > This version also works with pkg_install and handle the > > /var/db/glusterd/groups/virt file as a config file so it will not overw= rite user > > one on reinstallation. > >=20 > > Maybe someone at FreeNAS it willing to take maintainership of this port= in the > > ports tree? > >=20 > > Jordan does this new port fits FreeNAS requirements? >=20 > Something seems to have eaten the attachment - did you simply update the = diff at http://people.freebsd.org/~bapt/glusterfs.diff ? If so, I can re-g= rab that and test it in FreeNAS easily tomorrow. 
>
> I think we could also find someone in the FreeNAS project to take over the port, yes, certainly.
>
> Thanks!
>
> - Jordan
>

sorry, yes, I updated the diff in ~bapt

regards,
Bapt

--GxcwvYAGnODwn7V8 Content-Type: application/pgp-signature
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlOuaQEACgkQ8kTtMUmk6Exw/wCfZ3cTOxjbWSwOoHKfK/zdCY1h
5X0Anj8TIBesKKzNbnLzHjxMiaQECfIu
=5Ep+
-----END PGP SIGNATURE-----
--GxcwvYAGnODwn7V8--

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 07:04:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 95B39B94 for ; Sat, 28 Jun 2014 07:04:56 +0000 (UTC) Received: from mail-qa0-f45.google.com (mail-qa0-f45.google.com [209.85.216.45]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5494A2BA4 for ; Sat, 28 Jun 2014 07:04:55 +0000 (UTC) Received: by mail-qa0-f45.google.com with SMTP id v10so4733463qac.4 for ; Sat, 28 Jun 2014 00:04:49 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=nC0Nyu4t7wCD3YwY3Hf7EVQt7+y/ZLdlcp8/+L/1fP4=; b=GcntCJXeLQcmqzfTmiwxHK4zQ2jh2gnVPa4ZC/6XRqLZXxILjHs3YXWiMggWzMa+mf NsBYXMFn2jKZzyScexPc35QgQvX4TcU6i20tambPKJlva+QeSZShDKlKNhZBIoGbeSKg GHa7fDinKby7mcQ0K2wxotfQwcaua9kuO+I3wfINiYvWH/GFVuuTPXz/3a/T6ji6W3I3 A6JSE3qgyFtGfCnDI5kRM7N/YVRWe+WNlvTZKl4O4sOxciZS6ZwRrDYaSa8+x1gzcXF9 2WYWpE9xHDwfdC512FFQTo1Fz7wN+5q9hyl40P7RMPvj0t085Y0v0vmSzmgTZYbNW+gc fw5g== X-Gm-Message-State: ALoCoQkzuxwMnugoT/7VyL1ZIW/Z9lKzE++ewS7n7YI5ZGSjsv8xQjQ4AG61KtYRdYa015XKZYAF MIME-Version: 1.0 X-Received: by 10.224.68.2 with SMTP id t2mr41311962qai.71.1403939089659; Sat, 28 Jun 2014 00:04:49 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sat, 28 Jun 2014 00:04:49 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> Date: Sat, 28 Jun 2014 00:04:49 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Jordan Hubbard Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 07:04:56 -0000

>> Maybe someone at FreeNAS is willing to take maintainership of this port in the
>> ports tree?
>>
>> Jordan, does this new port fit FreeNAS requirements?
>
> Something seems to have eaten the attachment - did you simply update the diff at http://people.freebsd.org/~bapt/glusterfs.diff ? If so, I can re-grab that and test it in FreeNAS easily tomorrow.
>

I would personally think that such a specific change of paths and configuration should be done directly inside GlusterFS. Currently the problem is that this is in fact a hardcoded path that doesn't honor ${PREFIX}; I can get this fixed appropriately in GlusterFS upstream.
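To make the second of the two options below concrete, a sketch only,
assuming upstream derives the state directory from Autoconf's
localstatedir instead of hardcoding /var/lib:

    # A packager could then steer glusterd's state under /var at build time:
    ./configure --prefix=/usr/local --localstatedir=/var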
This is in fact an issue on the NetBSD and OS X ports too, which do not honor "/var/lib". There are two options:

- A conditional check for GlusterFS on FreeBSD to use "/var/db/glusterd" instead?
- Make it configurable and honor "--localstatedir", so one can pass --localstatedir=/var while building a package.

--
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 07:07:59 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B6909C36 for ; Sat, 28 Jun 2014 07:07:59 +0000 (UTC) Received: from mail-qg0-f45.google.com (mail-qg0-f45.google.com [209.85.192.45]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 749592BBC for ; Sat, 28 Jun 2014 07:07:58 +0000 (UTC) Received: by mail-qg0-f45.google.com with SMTP id a108so277806qge.32 for ; Sat, 28 Jun 2014 00:07:52 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=8Bw8GmEyIV/otdrZb4aYTRD3ZW/JfAALqgv0OUhrc28=; b=WjpqAD0XhHeNIvBnOLiMfG9ZN1S7V7P1qJYSBJWSP/BknGIHrWMiknvDF5Yp+NZ4nF g6iD0nKqRJFJWVM6a6Vzt+wih1P7mVMzMsIZSKc+yrHpvjjJ3OqG5McxeR3b2JfCmxie NZy2asyLh8Zrjt4e1qEfehwPlaNQhQpRDpwk/YICnsZtMeqoKLYN3aZmqvAnop57sWWj +d6lLw5eKbW8enzsoo6CWpoSDZq6XxVcwugG3xqkMGacL5UAFArwDxLo0fMimyQWewIZ DDVV8ENvKsQqABEgT/TeRlPtx+jDJXJsX/PsquY6zem419ra0JosbjgewritiFJHUO96 TuKw== X-Gm-Message-State: ALoCoQmfxUnWp/2k6vvdvgMJBkr/0F3WcnWwM3QXEXa5hmrSYAx698q35q/v+zQQCjTitxjrJ3rV MIME-Version: 1.0 X-Received: by 10.224.127.197 with SMTP id h5mr41121215qas.3.1403939272548; Sat, 28 Jun 2014 00:07:52 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sat, 28 Jun 2014 00:07:52 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <20140628070433.GE34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> <20140628070433.GE34108@ivaldir.etoilebsd.net> Date: Sat, 28 Jun 2014 00:07:52 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Baptiste Daroussin Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, Jordan Hubbard X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 07:07:59 -0000

On Sat, Jun 28, 2014 at 12:04 AM, Baptiste Daroussin wrote:
> On Fri, Jun 27, 2014 at 11:29:22PM -0700, Jordan Hubbard wrote:
>>
>> On Jun 27, 2014, at 3:34 PM, Baptiste Daroussin wrote:
>>
>> > Here is a new version which uses /var/db, /var/log and /var/run
>> > This version also works with pkg_install and handles the
>> > /var/db/glusterd/groups/virt file as a config file so it will not overwrite a user's
>> > copy on reinstallation.
>> >
>> > Maybe someone at FreeNAS is willing to take maintainership of this port in the
>> > ports tree?
>> >
>> > Jordan, does this new port fit FreeNAS requirements?
>> >> Something seems to have eaten the attachment - did you simply update the diff at http://people.freebsd.org/~bapt/glusterfs.diff ? If so, I can re-grab that and test it in FreeNAS easily tomorrow. >> Also looking at the "glusterfs.diff" one doesn't have to use external "argp" http://people.freebsd.org/~bapt/glusterfs.diff +CONFIGURE_ARGS= --with-pkgconfigdir=${PREFIX}/libdata/pkgconfig \ + --with-mountutildir=${PREFIX}/sbin \ + --localstatedir=/var +CPPFLAGS+= -I/usr/local/include +LDFLAGS+= -L/usr/local/lib -largp +ACLOCAL_ARGS= -I ./contrib/aclocal +AUTOMAKE_ARGS= --add-missing --copy --foreign argp-standalone is provided as part of the build process and its not a necessary build dependency. -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 07:17:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 471631A6 for ; Sat, 28 Jun 2014 07:17:41 +0000 (UTC) Received: from mail-we0-x22b.google.com (mail-we0-x22b.google.com [IPv6:2a00:1450:400c:c03::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CB5382C71 for ; Sat, 28 Jun 2014 07:17:40 +0000 (UTC) Received: by mail-we0-f171.google.com with SMTP id q58so6114546wes.2 for ; Sat, 28 Jun 2014 00:17:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=v3mFXvX2cuTb6UjTV0MCRHiGc4gc9WHkzUre2jDG3fc=; b=KWY+Usj1CFwISf0XTxWQKK89ev3rAEqdX+gne6TGx4elAyDkgvPPndUohfUK9CbowT aiuqnjZw4h3kYMi4rfiJcrYzxOYXn1BXtajdtCtzuzYxtKKbI96iq64moeU/LsjXvyIp 9pYCzhmRXUugNHWmcVpASLYeISI0zXwYhO+OLevCU3x3OHCTui9FfTP7WjwLPsr4TWYX x1D3NKcFDm+BhGGRsgSQQVI9Ao4N2+JDdFl1rd89yhuuqeINc3eIbgMPizFj8Vs8TodQ Mr0PJobVAc7UBhlMPu0MdS3Kc2e98U6n5AR7ReIMujvbAOpnx9DiSpX0zlz+WFFFkWlM wPYQ== X-Received: by 10.180.84.7 with SMTP id u7mr16711398wiy.1.1403939859082; Sat, 28 Jun 2014 00:17:39 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id ev9sm5076441wic.24.2014.06.28.00.17.37 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 28 Jun 2014 00:17:38 -0700 (PDT) Sender: Baptiste Daroussin Date: Sat, 28 Jun 2014 09:17:35 +0200 From: Baptiste Daroussin To: Harshavardhana Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140628071735.GF34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> <20140628070433.GE34108@ivaldir.etoilebsd.net> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="qp4W5+cUSnZs0RIF" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org, Jordan Hubbard X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 07:17:41 -0000 --qp4W5+cUSnZs0RIF Content-Type: text/plain; charset=us-ascii 
Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Sat, Jun 28, 2014 at 12:07:52AM -0700, Harshavardhana wrote:
> On Sat, Jun 28, 2014 at 12:04 AM, Baptiste Daroussin wrote:
> > On Fri, Jun 27, 2014 at 11:29:22PM -0700, Jordan Hubbard wrote:
> >>
> >> On Jun 27, 2014, at 3:34 PM, Baptiste Daroussin wrote:
> >>
> >> > Here is a new version which uses /var/db, /var/log and /var/run
> >> > This version also works with pkg_install and handles the
> >> > /var/db/glusterd/groups/virt file as a config file so it will not overwrite a user's
> >> > copy on reinstallation.
> >> >
> >> > Maybe someone at FreeNAS is willing to take maintainership of this port in the
> >> > ports tree?
> >> >
> >> > Jordan, does this new port fit FreeNAS requirements?
> >>
> >> Something seems to have eaten the attachment - did you simply update the diff at http://people.freebsd.org/~bapt/glusterfs.diff ? If so, I can re-grab that and test it in FreeNAS easily tomorrow.
> >>
>
> Also looking at the "glusterfs.diff" one doesn't have to use external "argp"
>
> http://people.freebsd.org/~bapt/glusterfs.diff
>
> +CONFIGURE_ARGS= --with-pkgconfigdir=${PREFIX}/libdata/pkgconfig \
> + --with-mountutildir=${PREFIX}/sbin \
> + --localstatedir=/var
> +CPPFLAGS+= -I/usr/local/include
> +LDFLAGS+= -L/usr/local/lib -largp
> +ACLOCAL_ARGS= -I ./contrib/aclocal
> +AUTOMAKE_ARGS= --add-missing --copy --foreign
>
> argp-standalone is provided as part of the build process and its not a
> necessary build dependency.
>
> --
> Religious confuse piety with mere ritual, the virtuous confuse
> regulation with outcomes

It is better imho to use the one we have in ports, as it receives the normal maintenance, which avoids having to do the job twice; and it is simpler as well, since that means I do not have to run autogen.sh but can use the regular macros to regenerate the configure/Makefiles.

regards,
Bapt

--qp4W5+cUSnZs0RIF Content-Type: application/pgp-signature
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlOubA8ACgkQ8kTtMUmk6EzoywCfVREziV610wvQElFR7XCSTZ/A
rbQAoJmLQMMH3X03IgPlfUBeKSitkxQJ
=7Ebb
-----END PGP SIGNATURE-----
--qp4W5+cUSnZs0RIF--

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 08:37:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6088277D for ; Sat, 28 Jun 2014 08:37:18 +0000 (UTC) Received: from mail-qa0-f41.google.com (mail-qa0-f41.google.com [209.85.216.41]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1D7412211 for ; Sat, 28 Jun 2014 08:37:17 +0000 (UTC) Received: by mail-qa0-f41.google.com with SMTP id cm18so4950210qab.0 for ; Sat, 28 Jun 2014 01:37:11 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=ehv1XduQ/DKH+hhJsDV8dEPJYEb6Zt0xBY8+vyqBeDA=; b=U07+WVEPVPso7c8MPxxOsw9GBgPTct1Yc3GbBBc/ammMpO0RX4mWdSC8lyHlYdCGP4 wNMzQ5Wn4zrcfwruU3VNmsYV1oQfIr+NLJQV6yajxtEvDUzUu9G2Ecx45lpkkvvnJjcw Z5n1YorLjxBRYx39lCkoIKYPcQFUEDBkpvnlGNqaZMGdICz97ankBMQRJX7i3UJ3I8NS ZcoJPTOdrtAedXW16/MQ08Tl6Z0+Y+s4oTj8S2wun+eKEn0O/BdxkXus4frRJ+0Le5EF ZZOMJeQh6kPZBOwMoNTPNTylRyxbIAJfpvin1OeGSw2hKnfwrmiLDdyv+PUsVSmIJ2ej mqAg== X-Gm-Message-State: ALoCoQmSbxz66yCNrqDgm92HnOkHOX1+kd8M9vQ2RiZbMEkohKzZrmlLPJwNw2TSvJ2r15c9UB2q MIME-Version: 1.0 X-Received: by 10.140.93.163 with SMTP id d32mr39682764qge.1.1403944631431; Sat, 28 Jun 2014 01:37:11 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sat, 28 Jun 2014 01:37:11 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <20140628071735.GF34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> <20140628070433.GE34108@ivaldir.etoilebsd.net> <20140628071735.GF34108@ivaldir.etoilebsd.net> Date: Sat, 28 Jun 2014 01:37:11 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Baptiste Daroussin Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, Jordan Hubbard X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 08:37:18 -0000

> It is better imho to use the one we have in ports, as it receives the
> normal maintenance, which avoids having to do the job twice; and it is
> simpler as well, since that means I do not have to run autogen.sh but
> can use the regular macros to regenerate the configure/Makefiles

The tarball was provided only as a placeholder, and running "./autogen.sh" is an interim step; it is not necessary on an official release, which ships with "./configure". In fact, argp-standalone is the one tested and blessed (since one cannot guarantee what a FreeBSD port might diverge into).

--
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes

From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 11:07:45 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E66BFED8 for ; Sat, 28 Jun 2014 11:07:45 +0000 (UTC) Received: from mail-qg0-f50.google.com (mail-qg0-f50.google.com [209.85.192.50]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A26182D05 for ; Sat, 28 Jun 2014 11:07:44 +0000 (UTC) Received: by mail-qg0-f50.google.com with SMTP id j5so407921qga.23 for ; Sat, 28 Jun 2014 04:07:38 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=Ko6/i5XKaVYeCc5fZfgwr7V5XYbWabqGzEms0KrVpxY=; b=XpCMiGK0NCSjBEmbMdsicGhgaNieJUbF71wtQk+MM5GCkr/S5zebJx5vDbsseIX2ee bsM/KgEtxUDiiX8Kw7wCTLMuynBSiuvZWyGtjeu1kv7fPTA6CBM3s+qWZXCXJXk/jyus HYKSwFcU2Pot9URUK+aflclHUy2Wphs0BxZ0Ex5LymeT8629JakDZKWQhYZ6L8BdB8XE xjCbt4QpXsm4RrA596p4L4qYvTAIDAPRYlbpSBSHchfbFiPGOIIERNsmL0xvrKeEfLH1 r8W5y+OAaCzhaFt9Na+1PnBkiW/aZSWpdaJmCzS4fjvdllevyERqF/2Dw9WpxoVR51HX yCJA== X-Gm-Message-State: ALoCoQmcaeqVtciV2ezxTFUShR3ka9GBoFg40wUxjfsHF89qwHHmrrKukGiZgP/FngFi9PpsXeFV MIME-Version: 1.0 X-Received: by 10.229.234.3 with SMTP id ka3mr32677274qcb.16.1403953658227; Sat, 28 Jun 2014 04:07:38 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sat, 28 Jun 2014 04:07:38 -0700 (PDT)
X-Originating-IP: [24.4.138.100] In-Reply-To: References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <20140627223456.GD34108@ivaldir.etoilebsd.net> <20140628070433.GE34108@ivaldir.etoilebsd.net> <20140628071735.GF34108@ivaldir.etoilebsd.net> Date: Sat, 28 Jun 2014 04:07:38 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Baptiste Daroussin Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, Jordan Hubbard X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 11:07:46 -0000 > it is not necessary on an official release which comprises of "./configure" . > > In-fact argp from ports is okay to use, but it would be less work to rely on argp-standalone provided by GlusterFS. If the reason to use 'argp' from port was related to ./autogen.sh - then in-fact its not necessary. As i explained earlier - it was our mistake to provide only 'git' tarball not the 'make dist' tarball. I have further changes pending on review upstream, once they are pushed - i will provide a new tarball - without the necessity for './autogen.sh' Thanks for testing this port. -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes From owner-freebsd-fs@FreeBSD.ORG Sat Jun 28 11:17:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C2E1623B for ; Sat, 28 Jun 2014 11:17:53 +0000 (UTC) Received: from mail-wg0-x229.google.com (mail-wg0-x229.google.com [IPv6:2a00:1450:400c:c00::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5100A2E07 for ; Sat, 28 Jun 2014 11:17:53 +0000 (UTC) Received: by mail-wg0-f41.google.com with SMTP id a1so5997899wgh.0 for ; Sat, 28 Jun 2014 04:17:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=peamGtHSmxwSa48iWsjfG6rMXMmNaI2bykWclDP+XCc=; b=aLRGEM3mgxFNA0H7GlxAcKVulTj2nKyKvrpswnZSKP3F286DyDH1SqhkK8pODk0zI6 vFhYNLYdGQ/s2DjVNuPBQ9el7w+hg6O1OA4TPfm40mFS2zwopBG2lWQLYfFyWjP2H6i0 ITxBv3eJU219Ewp0gOP/GsIFlMWDHmL2LJeRdEqSthCTTQx3otXpy5PHWIsp2IXqrYUG Pqqsd1pOeQUnfL1sFq69fCNa7PH/9SjNKG8W9AX+773/qkFAQSAgpADGouZ3TdFB5OL1 GTKu8U0FYwBkrGBM8HGmoA37bqDrhZiG5EXSig4EMgFfcNu3KpJ1/QxV4TDGHhjYpORX pmxQ== X-Received: by 10.180.75.197 with SMTP id e5mr17728262wiw.76.1403954271409; Sat, 28 Jun 2014 04:17:51 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id l5sm7062735wif.22.2014.06.28.04.17.50 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 28 Jun 2014 04:17:50 -0700 (PDT) Sender: Baptiste Daroussin Date: Sat, 28 Jun 2014 13:17:48 +0200 From: Baptiste Daroussin To: Harshavardhana Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140628111748.GG34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> 
<20140627223456.GD34108@ivaldir.etoilebsd.net> <20140628070433.GE34108@ivaldir.etoilebsd.net> <20140628071735.GF34108@ivaldir.etoilebsd.net> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="KR/qxknboQ7+Tpez" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org, Jordan Hubbard X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 28 Jun 2014 11:17:53 -0000 --KR/qxknboQ7+Tpez Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sat, Jun 28, 2014 at 04:07:38AM -0700, Harshavardhana wrote: > > it is not necessary on an official release which comprises of "./config= ure" . > > > > >=20 > In-fact argp from ports is okay to use, but it would be less work to > rely on argp-standalone > provided by GlusterFS. If the reason to use 'argp' from port was > related to ./autogen.sh - > then in-fact its not necessary. >=20 > As i explained earlier - it was our mistake to provide only 'git' > tarball not the 'make dist' tarball. > I have further changes pending on review upstream, once they are > pushed - i will provide a new > tarball - without the necessity for './autogen.sh' >=20 > Thanks for testing this port. Thank you in general we try to unbundle as much thing as possible that said= argp is small and safe enough to not cause problems having it bundled :) So I have no strong opinion on that. regards, Bapt --KR/qxknboQ7+Tpez Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlOupFwACgkQ8kTtMUmk6EyI9ACggjeMzeLR+hUxGgdTqK0GgYgi zU0AnjJJmIo/EyydZ3rMQCyikjY3UL6Q =m9JT -----END PGP SIGNATURE----- --KR/qxknboQ7+Tpez-- From owner-freebsd-fs@FreeBSD.ORG Sun Jun 29 18:30:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 25B96C60; Sun, 29 Jun 2014 18:30:58 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8E46E2E5C; Sun, 29 Jun 2014 18:30:57 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id 6F41877E98; Sun, 29 Jun 2014 11:22:28 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 15699-02; Sun, 29 Jun 2014 11:22:28 -0700 (PDT) Received: from [10.8.0.6] (unknown [10.8.0.6]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id E949277E95; Sun, 29 Jun 2014 11:22:26 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: FreeBSD support being added to GlusterFS From: Jordan Hubbard In-Reply-To: <20140627070411.GI24440@ivaldir.etoilebsd.net> Date: Sun, 29 Jun 2014 11:22:24 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> 
<20140627070411.GI24440@ivaldir.etoilebsd.net> To: Baptiste Daroussin X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Jun 2014 18:30:58 -0000

On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin wrote:

> For your information, here is my version:
> http://people.freebsd.org/~bapt/glusterfs.diff
>
> It is just missing the license bits
>
> if everyone here agrees I'll commit :)

Unfortunately, I can't get this to build under poudriere. It builds fine outside of the build jail, but inside, it seems to get tripped up on the patches. The previous version we also only got to build under poudriere by committing all sorts of terrible hacks against autogen.sh. FreeNAS 9.3, of course, also uses poudriere to build all its ports. :)

Is there any chance of just getting the FreeBSD-specific changes folded into the git repo (i.e. upstreamed) with suitable #ifdefs, so that the port can be simplified a bit and hopefully made more poudriere-friendly?

Thanks,

- Jordan

From owner-freebsd-fs@FreeBSD.ORG Sun Jun 29 19:12:41 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3EF7170D for ; Sun, 29 Jun 2014 19:12:41 +0000 (UTC) Received: from mail-qc0-f171.google.com (mail-qc0-f171.google.com [209.85.216.171]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EFE2E21ED for ; Sun, 29 Jun 2014 19:12:40 +0000 (UTC) Received: by mail-qc0-f171.google.com with SMTP id w7so6222162qcr.2 for ; Sun, 29 Jun 2014 12:12:34 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=UogubytaGJx4dN1IY3dd6ZX2ZGRy/S5r5QzzgUha5EU=; b=jvWI0Q9sl4VNDwlUGxV8zT6/Mt2Vf7uS/8tuZv9L+rzjBSr/zXgcqpKLWXnoZz5IEM cU3ja4U4q4tGesLIkt5KZ1w3ohCGd/15rKxGg55DL5Ql2BGJAUPeoWxBMI0iC4YO1e7m /vvDAZCpRSq1vLzMeQ+SoXGlzfyjgqt+4M7gLn0v+0inXnuVtk6bMW11TGX+Ir9PtRNR HtrFSK0BUtXbDtOp56PbOZRPAvNHjMDX70HDa5HmZll2KDxzF0WK40dJKQSp18e6Xxvl E9aDriaXteZ0Px4GvCKuz5T1Sxo4s+Jzlkar53lHRn6TAZoK7YuCnEOXIF/BJEK7kMWr kNbw== X-Gm-Message-State: ALoCoQmfm+gTb28zYM0DpFnupe+wQ9UGF8LbMFoJY13xwvgs/MrHX32vOoigkcZSu240239Bbr24 MIME-Version: 1.0 X-Received: by 10.140.93.163 with SMTP id d32mr52186710qge.1.1404069153941; Sun, 29 Jun 2014 12:12:33 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sun, 29 Jun 2014 12:12:33 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> Date: Sun, 29 Jun 2014 12:12:33 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Jordan Hubbard Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list
List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Jun 2014 19:12:41 -0000

The patches are pending review and will hopefully be pushed this week!

- can I get these installed on a VirtualBox instance and test the failures?

I don't understand what a "poudriere" build is - is it like building inside a chrooted environment?

The changes from "/var/lib" to "/var/db" are not complete in the diff; in fact, inside the code "DEFAULT_WORKDIR /var/lib/glusterd" is hard-coded, and the work is only complete once all the references point to "/var/db/glusterd".

I will be making these changes this week. Is "/var/db" the directory we all agree upon for "glusterd" configuration files?

On Sun, Jun 29, 2014 at 11:22 AM, Jordan Hubbard wrote:
>
> On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin wrote:
>
>> For your information, here is my version:
>> http://people.freebsd.org/~bapt/glusterfs.diff
>>
>> It is just missing the license bits
>>
>> if everyone here agrees I'll commit :)
>
> Unfortunately, I can't get this to build under poudriere. It builds fine outside of the build jail, but inside, it seems to get tripped up on the patches. The previous version we also only got to build under poudriere by committing all sorts of terrible hacks against autogen.sh. FreeNAS 9.3, of course, also uses poudriere to build all its ports. :)
>
> Is there any chance of just getting the FreeBSD-specific changes folded into the git repo (i.e. upstreamed) with suitable #ifdefs, so that the port can be simplified a bit and hopefully made more poudriere-friendly?
>
> Thanks,
>
> - Jordan
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

--
Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes

From owner-freebsd-fs@FreeBSD.ORG Sun Jun 29 20:30:09 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 45D813CE; Sun, 29 Jun 2014 20:30:09 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4CDD726F8; Sun, 29 Jun 2014 20:30:08 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id 9C45C77947; Sun, 29 Jun 2014 13:30:07 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 23597-07; Sun, 29 Jun 2014 13:30:07 -0700 (PDT) Received: from [10.8.0.6] (unknown [10.8.0.6]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id 30E5377944; Sun, 29 Jun 2014 13:30:05 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: FreeBSD support being added to GlusterFS From: Jordan Hubbard In-Reply-To: Date: Sun, 29 Jun 2014 13:30:04 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <186F1E29-986E-4C4C-A944-91A0035C09EB@mail.turbofuzz.com> References:
<6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> To: Harshavardhana X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Jun 2014 20:30:09 -0000

On Jun 29, 2014, at 12:12 PM, Harshavardhana wrote:

> The patches are pending review and will hopefully be pushed this week!

Cool!

> - can I get these installed on a VirtualBox instance and test the
> failures?

Sure, you can always install http://download.freenas.org/nightlies/9.3/ALPHA/20140625/ under VirtualBox - that has the glusterfs port in it as currently described by https://github.com/freenas/ports/tree/freenas/9-stable/sysutils/glusterfs (I love github!). I tried updating that port to Baptiste's latest version last night, but like I said, it croaked during the build process, so I couldn't get another 9.3 nightly done and ended up simply reverting the changes for now.

That 2nd URL will also show you the current state of the Makefile and patches we needed to get glusterfs compiling under FreeNAS, which may prove instructive as to why "our version" builds and the other does not.

Longer-term, once we get glusterd to not simply fall over with missing paths, I can also create the glusterfs team an externally-facing VMware VM containing the latest FreeNAS 9.3 build, if desired, so they can test interoperability or otherwise beat on it to their heart's content. We do this for the Samba project as well, since there's nothing like seeing one's code actually running in the target NAS environment to ensure it actually works, and it's easy to install nightly builds on it if and as things evolve.

> I don't understand what a "poudriere" build is - is it like building
> inside a chrooted environment?

In essence, yes, though it also provides some other nice features that accelerate the process of building appliances based on FreeBSD packages. See https://fossil.etoilebsd.net/poudriere/doc/trunk/doc/index.wiki for reference.

> The changes from "/var/lib" to "/var/db" are not complete in the diff;
> in fact, inside the code "DEFAULT_WORKDIR /var/lib/glusterd" is
> hard-coded, and the work is only complete once all the references
> point to "/var/db/glusterd".
>
> I will be making these changes this week. Is "/var/db" the directory
> we all agree upon for "glusterd" configuration files?

I believe so, yes. Thanks!
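For readers who, like Harshavardhana above, have not met poudriere before: it builds every port from scratch inside a freshly cloned jail, which is exactly what exposes the autogen.sh problems discussed in this thread. A minimal session looks roughly like the sketch below; the jail name, tree name and FreeBSD version are illustrative, not taken from this thread:

  # pkg install poudriere
  # poudriere jail -c -j 100amd64 -v 10.0-RELEASE       (create a clean build jail)
  # poudriere ports -c -p default                       (fetch a ports tree)
  # poudriere bulk -j 100amd64 -p default sysutils/glusterfs

If a port builds here, it should build on any clean system of that release; failures at this stage usually point at undeclared dependencies or at build steps that assume files outside the jail.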
- Jordan

From owner-freebsd-fs@FreeBSD.ORG Sun Jun 29 20:37:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 20C515AB for ; Sun, 29 Jun 2014 20:37:52 +0000 (UTC) Received: from mail-we0-x232.google.com (mail-we0-x232.google.com [IPv6:2a00:1450:400c:c03::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A6A34278A for ; Sun, 29 Jun 2014 20:37:51 +0000 (UTC) Received: by mail-we0-f178.google.com with SMTP id x48so7300857wes.23 for ; Sun, 29 Jun 2014 13:37:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=sIzmjts80bdCbs6iJcZDqW8FBN55hmQ1H9c2TCt1nv4=; b=vsYV47g2kqzLOTtfk86noRTL6okg9uALM6ntqSZ1sMp8TAKJZJSAKsXgWvpMNkwM6G L/n+7BZi/UJFpuz7s5iYbXoWYH4oTiy+h7uJVTGfdrc792V84nwK2XJ9SAhYN30y0pGt WJayk0w2mlReinKcd3Wl4uTsw/+W30BNpLBSut4P/5Ogqw5+WaZSouuaDVhOzLhMQ1cs Au5/xFu1BdKYbvYghqLhtqxag5RLQ/YpIFYpl/KqKSHHanrl74Iae62wUlNWApqMPSPp yXtHjoilQeUyY6lhOlujiyg3W37tz0nveFWfsmS+vKvCWGOaVbey4+Xo5dA8sthf2Hit soqA== X-Received: by 10.194.87.134 with SMTP id ay6mr4153391wjb.84.1404074269819; Sun, 29 Jun 2014 13:37:49 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by mx.google.com with ESMTPSA id l5sm23280730wif.22.2014.06.29.13.37.48 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sun, 29 Jun 2014 13:37:48 -0700 (PDT) Sender: Baptiste Daroussin Date: Sun, 29 Jun 2014 22:37:46 +0200 From: Baptiste Daroussin To: Jordan Hubbard Subject: Re: FreeBSD support being added to GlusterFS Message-ID: <20140629203746.GI34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="2nTeH+t2PBomgucg" Content-Disposition: inline In-Reply-To: <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 29 Jun 2014 20:37:52 -0000 --2nTeH+t2PBomgucg Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Sun, Jun 29, 2014 at 11:22:24AM -0700, Jordan Hubbard wrote:
>
> On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin wrote:
>
>> For your information, here is my version:
>> http://people.freebsd.org/~bapt/glusterfs.diff
>>
>> It is just missing the license bits
>>
>> if everyone here agrees I'll commit :)
>
> Unfortunately, I can't get this to build under poudriere. It builds fine outside of the build jail, but inside, it seems to get tripped up on the patches. The previous version we also only got to build under poudriere by committing all sorts of terrible hacks against autogen.sh. FreeNAS 9.3, of course, also uses poudriere to build all its ports.
:) >

I'll have access to my buildbox tomorrow and fix the build in poudriere; I'll update the patch accordingly.

regards,
Bapt

--2nTeH+t2PBomgucg Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlOweRoACgkQ8kTtMUmk6EysLwCfRIXAYDuFBtWWPmujguNE7P+8 mrMAnjtjZbuaLw2dwJViJtLk6L9IvctP =c59B -----END PGP SIGNATURE----- --2nTeH+t2PBomgucg--

From owner-freebsd-fs@FreeBSD.ORG Mon Jun 30 03:49:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9B05A5D4 for ; Mon, 30 Jun 2014 03:49:20 +0000 (UTC) Received: from mail-qc0-f180.google.com (mail-qc0-f180.google.com [209.85.216.180]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5545225CF for ; Mon, 30 Jun 2014 03:49:19 +0000 (UTC) Received: by mail-qc0-f180.google.com with SMTP id r5so6601961qcx.11 for ; Sun, 29 Jun 2014 20:49:13 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=zimvBU0zcTW1nzIuVva9sSLDjqFpqHvyaNAY+gZ6jdk=; b=H2r9sZXxYKE7xpJJIeywh4H4ib3I7YhQU6aSB4PlAGXI+tEWaANQQy0CEgsEMOb6su Cp7aOg2e2xCl4l36Oo8VMq78wZ71ndp27Db75mzdP2U5ahuNOmAhrRl4gcuGnRmD76ge 7xIIrPyVHZAV8N80/WHpASGswyixFZX54zLZQznQhx+jAOzFuEZvFMz0xcZjFRMKQ1w1 idaGjZYM8PEXA7O4Aui3IoXw+2Mgqw0QrWDX1KceeiU4BckhcfbeW1M8bqrPHVMOMfcY r3pnVvmKsIMA5hfqI5GL4s6QGfl6zjW4SYhMX/+4Xg5QwvaDRAS9K1EYTmnU1H8BZI3M npXQ== X-Gm-Message-State: ALoCoQnTExfgH3fNK+PuTN8NMEKTast/ldqYB5emNCdzTQafop3lEijVWQQ8DytcB4DR5Ly9eV2T MIME-Version: 1.0 X-Received: by 10.140.93.163 with SMTP id d32mr54684135qge.1.1404100153323; Sun, 29 Jun 2014 20:49:13 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sun, 29 Jun 2014 20:49:13 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <20140629203746.GI34108@ivaldir.etoilebsd.net> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net> Date: Sun, 29 Jun 2014 20:49:13 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Baptiste Daroussin Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Jun 2014 03:49:20 -0000

http://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140629.tar.bz2 - I just made the necessary changes from "/var/lib" to "/var/db" - please test this tarball out; relevant patches are posted for upstream review.
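(For anyone who wants to try the tarball, a minimal smoke test is sketched below. The extracted directory name is a guess, and ./autogen.sh is only needed if the tarball was rolled from git rather than via "make dist", as discussed earlier in this thread; GNU make and the usual autotools are assumed to be installed.)

  # fetch http://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140629.tar.bz2
  # tar xjf glusterfs-freebsd_20140629.tar.bz2
  # cd glusterfs-*              (directory name is an assumption)
  # sh ./autogen.sh             (skip if ./configure is already present)
  # ./configure && gmake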
Thanks

On Sun, Jun 29, 2014 at 1:37 PM, Baptiste Daroussin wrote:
> On Sun, Jun 29, 2014 at 11:22:24AM -0700, Jordan Hubbard wrote:
>>
>> On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin wrote:
>>
>> > For your information, here is my version:
>> > http://people.freebsd.org/~bapt/glusterfs.diff
>> >
>> > It is just missing the license bits
>> >
>> > if everyone here agrees I'll commit :)
>>
>> Unfortunately, I can't get this to build under poudriere. It builds fine outside of the build jail, but inside, it seems to get tripped up on the patches. The previous version we also only got to build under poudriere by committing all sorts of terrible hacks against autogen.sh. FreeNAS 9.3, of course, also uses poudriere to build all its ports. :)
>>
> I'll have access to my buildbox tomorrow and fix the build in poudriere;
> I'll update the patch accordingly
>
> regards,
> Bapt

--
Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes

From owner-freebsd-fs@FreeBSD.ORG Mon Jun 30 08:00:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 863F57E8 for ; Mon, 30 Jun 2014 08:00:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 63823291F for ; Mon, 30 Jun 2014 08:00:11 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s5U80BNv003764 for ; Mon, 30 Jun 2014 09:00:11 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) Message-Id: <201406300800.s5U80BNv003764@kenobi.freebsd.org> From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Mon, 30 Jun 2014 08:00:11 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Jun 2014 08:00:11 -0000

Hi,

You have a bug in the "Needs MFC" state which has not been touched in 7 or more days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout, you may update this bug with a comment and I won't remind you again for 7 days. This reminder is only sent on Mondays. Please file a bug about any concerns you may have.

This search was scheduled by eadler@FreeBSD.org.
(5 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device From owner-freebsd-fs@FreeBSD.ORG Tue Jul 1 23:36:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 142BE9F; Tue, 1 Jul 2014 23:36:08 +0000 (UTC) Received: from mail-lb0-x231.google.com (mail-lb0-x231.google.com [IPv6:2a00:1450:4010:c04::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 503222C61; Tue, 1 Jul 2014 23:36:07 +0000 (UTC) Received: by mail-lb0-f177.google.com with SMTP id u10so7333020lbd.22 for ; Tue, 01 Jul 2014 16:36:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=SZ7X2cf10ACFanVTQo7iu/CTsWLzXlytx+of2miJ/xI=; b=q2J+VEO8tdmlGqoixSZcy4Qq6fpPN+1zc3AHfTBZsf+7BbhSbQAsMFPac/HsLg5IQl XZMGG7FCzd6zMftK67RMoB46FyqIYqdmY/QV672Em/ewulB9+Y8I3I58gsEX8/iEavE1 GsqAAQVIVMR7YEJNYunMBMXQT7JtlAnT8r+Ga8jHwYAprNlpQBg8t/MWXoAXOwHMsmWv lSfKhZzP/BFwE74JNBhT3Zy+h/ROdf04NwYEeH9lJsmGfc6gf5dhstHJOCCS57MO++o8 lY3oDjy4KkkKGMZENdFms0FkxJAHHD0lc6bpDBiKcanQBAixSVeNVYsmZZZ+eNbhfEVY 1KcA== MIME-Version: 1.0 X-Received: by 10.112.171.134 with SMTP id au6mr36117257lbc.21.1404257765160; Tue, 01 Jul 2014 16:36:05 -0700 (PDT) Received: by 10.152.195.7 with HTTP; Tue, 1 Jul 2014 16:36:05 -0700 (PDT) In-Reply-To: References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net> Date: Wed, 2 Jul 2014 09:36:05 +1000 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Outback Dingo To: Harshavardhana Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Jul 2014 23:36:08 -0000 On Mon, Jun 30, 2014 at 1:49 PM, Harshavardhana wrote: > > http://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20= 140629.tar.bz2 > - I just made the necessary changes from "/var/lib" to "/var/db" - > please test this tarball out - relevant patches are posted for > upstream review. > > Thanks > > > > On Sun, Jun 29, 2014 at 1:37 PM, Baptiste Daroussin > wrote: > > On Sun, Jun 29, 2014 at 11:22:24AM -0700, Jordan Hubbard wrote: > >> > >> On Jun 27, 2014, at 12:04 AM, Baptiste Daroussin > wrote: > >> > >> > > >> > For you information here is my version: > >> > http://people.freebsd.org/~bapt/glusterfs.diff > >> > > >> > It is just missing the license bits > >> > > >> > if everyone here agrees I'll commit :) > >> > >> Unfortunately, I can=E2=80=99t get this to build under poudriere. It = builds > fine outside of the build jail, but inside, it seems to get tripped up on > the patches. The previous version we also only got to build under > poudriere by committing all sorts of terrible hacks against autogen.sh. > FreeNAS 9.3, of course, also uses poudriere to build all its ports. :) > >> > > I ll havr access to my buildbox tomorrow, and fix the build in > poudriere, I ll > > update the patch accordingly > > > > regards, > > Bapt > > > just caught this on a fresh FreeBSD install uname -a FreeBSD bsdvirt 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 checking openssl/cmac.h presence... yes checking for openssl/cmac.h... yes ./configure: 13050: Syntax error: word unexpected (expecting ")") > > -- > Religious confuse piety with mere ritual, the virtuous confuse > regulation with outcomes > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Jul 2 02:15:15 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A857FF6B for ; Wed, 2 Jul 2014 02:15:15 +0000 (UTC) Received: from mail-qc0-f171.google.com (mail-qc0-f171.google.com [209.85.216.171]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 62D8228CA for ; Wed, 2 Jul 2014 02:15:14 +0000 (UTC) Received: by mail-qc0-f171.google.com with SMTP id w7so9186915qcr.2 for ; Tue, 01 Jul 2014 19:15:08 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=DA8tNioyCuUDV5pK6wRBCR7uB82jOHUxDZPrh4eAO04=; b=Boz85pEkx+TYdUVi0QVZLSYd+le4Eb24QoHNuhHbDB4+V4vnuChWNwnGxV1ssmyk7S mAu/3pdwzInknb8msONPV/9vCBFe/qLjnCXb1b85kuUgm/+4e52U3LXLQMI2gjIMpnaj K6yvPAVFkJSRjQwjMSsn71m1NFkX1sJTDdD6kPcy7ifu6eGBX3iLaV0GH7nDGryfTf/b 0o4xC0uvZsI5AKssaUl7kXyXJ7uMurTD9q5ATHKb6iOqokTe1OekCEBidojMIBaBUEQy /jQyp0FVp4aSd/1VNs+0NmLzKE1yY+ngUJ8Ks7NOLUejT5tmE7j8HM4TooNre3S6hL3T fYlQ== X-Gm-Message-State: ALoCoQmeeqwHDa5P35HTBR4InwtRXBygESRrfyEJo+ZJZOSuM2xIvQaqo+sCdSv85gXGd6alxXSM MIME-Version: 1.0 X-Received: by 10.140.93.163 with SMTP id 
d32mr76351252qge.1.1404267308714; Tue, 01 Jul 2014 19:15:08 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Tue, 1 Jul 2014 19:15:08 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net> Date: Tue, 1 Jul 2014 19:15:08 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Outback Dingo Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Jul 2014 02:15:15 -0000 > just caught this on a fresh FreeBSD install > > uname -a > FreeBSD bsdvirt 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 > 22:34:59 UTC 2014 > > > checking openssl/cmac.h presence... yes > checking for openssl/cmac.h... yes > ./configure: 13050: Syntax error: word unexpected (expecting ")") Do not see it on my end uname -a FreeBSD bsd-host 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 Was "./autogen.sh" clean? -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes From owner-freebsd-fs@FreeBSD.ORG Wed Jul 2 14:18:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 10FF142A for ; Wed, 2 Jul 2014 14:18:49 +0000 (UTC) Received: from mail.ultra-secure.de (mail.ultra-secure.de [88.198.178.88]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 509832A2F for ; Wed, 2 Jul 2014 14:18:47 +0000 (UTC) Received: (qmail 30101 invoked by uid 89); 2 Jul 2014 14:18:43 -0000 Received: by simscan 1.4.0 ppid: 30096, pid: 30098, t: 0.0358s scanners: attach: 1.4.0 clamav: 0.97.3/m:55/d:19152 Received: from unknown (HELO suse3.ewadmin.local) (rainer@ultra-secure.de@212.71.117.1) by mail.ultra-secure.de with ESMTPA; 2 Jul 2014 14:18:43 -0000 Date: Wed, 2 Jul 2014 16:18:37 +0200 From: Rainer Duffner To: Karl Denninger Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix Message-ID: <20140702161837.2a8ef126@suse3.ewadmin.local> In-Reply-To: <53A84ECA.6030308@denninger.net> References: <201405151530.s4FFU0d6050580@freefall.freebsd.org> <20140623161354.2fdd1289@suse3.ewadmin.local> <53A84ECA.6030308@denninger.net> X-Mailer: Claws Mail 3.9.2 (GTK+ 2.24.22; x86_64-suse-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Jul 2014 14:18:49 -0000 > It should apply cleanly to stock 10.0 but I have not checked it > recently. There was a change made in the VM defines by the rest of > the team but I believe it is properly ifdef'd so as to figure it out > and work both before and after. > So far, it looks good. 
http://s30.postimg.org/pa6hpaktd/rrdstd_Mem_Week_zoom_Mem.png http://s30.postimg.org/45r81v17l/rrdstd_VMem_Week_zoom_VMem.png I tried it in a VM with 10.0 first (to see if it actually still boots) and then applied it to the production-server with 10.0. Thanks a lot for all the work you have put into this. I really hope this will end up in a release at some point, so I don't have to rebuild the kernel (and patch the source) every time a kernel update gets published and can get back to freebsd-update. Best Regards Rainer From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 01:33:24 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3EED6E68 for ; Thu, 3 Jul 2014 01:33:24 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 264A02A2D for ; Thu, 3 Jul 2014 01:33:24 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s631XNbs005949 for ; Thu, 3 Jul 2014 02:33:23 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 121072] [smbfs] mount_smbfs(8) cannot normally convert the character-code. Date: Thu, 03 Jul 2014 01:33:24 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 7.0-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: nrgmilk@gmail.com X-Bugzilla-Status: Issue Resolved X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 01:33:24 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=121072 nrgmilk@gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Discussion |Issue Resolved Resolution|--- |FIXED --- Comment #2 from nrgmilk@gmail.com --- This issue was resolved by switch to new kiconv in 10-RELEASE. This problem was that text encoding conversion did not work in mount_smbfs. -- You are receiving this mail because: You are the assignee for the bug. 
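The conversion that comment #2 refers to is the one mount_smbfs requests through its -E flag, which names the local and server character sets. A typical invocation - with the share name and code page purely illustrative, not taken from the bug report - would be:

  # mount_smbfs -E UTF-8:CP932 //user@fileserver/share /mnt/smb

With the switch to the new kiconv in 10-RELEASE, file names on the share should round-trip through whatever conversion is given there.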
From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 09:43:14 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D9CDAD94 for ; Thu, 3 Jul 2014 09:43:14 +0000 (UTC) Received: from archeo.suszko.eu (archeo.unixguru.pl [IPv6:2001:41d0:1:f47a::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9778F2178 for ; Thu, 3 Jul 2014 09:43:14 +0000 (UTC) Received: from archeo (localhost [127.0.0.1]) by archeo.suszko.eu (Postfix) with ESMTP id 271AB2063809; Thu, 3 Jul 2014 11:43:03 +0200 (CEST) X-Virus-Scanned: amavisd-new at archeo.local Received: from archeo.suszko.eu ([127.0.0.1]) by archeo (archeo.local [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id Iz8_DCXnlK5B; Thu, 3 Jul 2014 11:43:02 +0200 (CEST) Received: from helium (gate.grtech.pl [195.8.99.234]) by archeo.suszko.eu (Postfix) with ESMTPSA id 7D9EF2063807; Thu, 3 Jul 2014 11:43:02 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=suszko.eu; s=dkim; t=1404380582; bh=EsAeOFet4rAVIYruGLJf4VXUmTjvcgqhfjshP+PndhU=; h=Date:From:To:Subject; b=QAInvB64pgTz6SviwTdKwv1esEXz6LK+4mBuJiK7hJKlaIGtrCNXvynTq69kp6h5q SRCkFaYZ01zu68U2IudglER1n8+EivMEv1Z3VDoXF5dxsupnt49pW+ATxPmgBF/OZQ bv7Lh6oLirgBAKa2QVj2vahlMcYt71o3/9rtSAbk= Date: Thu, 3 Jul 2014 11:42:54 +0200 From: Maciej Suszko To: freebsd-fs@FreeBSD.org Subject: ccdconfig and Linux mdadm Message-ID: <20140703114254.6472055a@helium> X-Mailer: Claws Mail 3.10.1 (GTK+ 2.24.22; amd64-portbld-freebsd10.0) MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/2AhD3wY3W5n/O1fD0.yyU91"; protocol="application/pgp-signature" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 09:43:15 -0000 --Sig_/2AhD3wY3W5n/O1fD0.yyU91 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Hi, I'm wondering if anyone have tested or used ccd(4) to access md devices created under Linux. According to ccdconfig(8) it should be possible but I have no luck setting it up. I created md0 under Linux (raid0, chunksize 512K): root@lnx:~ # cat /proc/mdstat Personalities : [raid1] [raid0]=20 md0 : active raid0 sdc1[1] sdb1[0] 523264 blocks super 1.2 512k chunks root@lnx:~ # mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Thu Jul 3 11:06:21 2014 Raid Level : raid0 Array Size : 523264 (511.09 MiB 535.82 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Thu Jul 3 11:06:21 2014 State : clean=20 Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 512K Name : t2.vbox:0 (local to host t2.vbox) UUID : b9927850:379de36b:82df9efc:6e7461af Events : 0 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 33 1 active sync /dev/sdc1 There is ext3 filesystem created directly on /dev/md0. 
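A useful cross-check at this point - and the detail that turns out to matter later in this thread - is where mdadm actually put the start of the data: v1.2 superblocks reserve a data offset at the head of each member device. Querying a member (device name illustrative) looks like this, with output along the lines of the second line:

  # mdadm -E /dev/sdb1 | grep -i offset
      Data Offset : 2048 sectors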
Now, trying to set up ccd device under FreeBSD-10-RELEASE amd64: root@fbsd:~ # ccdconfig -c /dev/ccd0 512 linux /dev/ada1s1 /dev/ada2s1 root@fbsd:~ # ccdconfig -g ccd0 1024 0 /dev/ada1s1 /dev/ada2s1 root@freebsd:~ # fsck.ext3 -n /dev/ccd0=20 e2fsck 1.42.9 (28-Dec-2013) ext2fs_open2: Bad magic number in super-block fsck.ext3: Superblock invalid, trying backup blocks... fsck.ext3: Bad magic number in super-block while trying to open /dev/ccd0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Any ideas? --=20 regards, Maciej Suszko. --Sig_/2AhD3wY3W5n/O1fD0.yyU91 Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlO1JaEACgkQCikUk0l7iGpm+gCggRDg+pbGpldsj4w+YUdYEMC6 ynwAnRgkwcORI0PHa2Cu2KAA8gpe2+qM =PNcK -----END PGP SIGNATURE----- --Sig_/2AhD3wY3W5n/O1fD0.yyU91-- From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 10:12:30 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5F1EF969 for ; Thu, 3 Jul 2014 10:12:30 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4639D24A0 for ; Thu, 3 Jul 2014 10:12:30 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s63ACUGo052023 for ; Thu, 3 Jul 2014 11:12:30 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 167685] [zfs] ZFS on USB drive prevents shutdown / reboot Date: Thu, 03 Jul 2014 10:12:29 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: raven428@gmail.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 10:12:30 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=167685 raven428@gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |raven428@gmail.com --- Comment #5 from raven428@gmail.com --- FreeBSD 10.0-STABLE #0 r266463: Tue May 20 18:24:03 UTC 2014 and the issue is still here, but to repeat it need to be created zvol on some zfs pool, http://farm6.staticflickr.com/5506/14550830655_d517ab28f5_b.jpg - screenshot. 
the machine is shutting down fine if no zvols are present in the system.

-- You are receiving this mail because: You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 11:55:39 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BCB13746 for ; Thu, 3 Jul 2014 11:55:39 +0000 (UTC) Received: from mail.helenius.fi (mail.helenius.fi [IPv6:2001:67c:164:40::91]) by mx1.freebsd.org (Postfix) with ESMTP id 5CBB12DF9 for ; Thu, 3 Jul 2014 11:55:38 +0000 (UTC) Received: from mail.helenius.fi (localhost [127.0.0.1]) by mail.helenius.fi (Postfix) with ESMTP id 4BB0C8FA7; Thu, 3 Jul 2014 11:55:34 +0000 (UTC) X-Virus-Scanned: amavisd-new at helenius.fi Received: from mail.helenius.fi ([127.0.0.1]) by mail.helenius.fi (mail.helenius.fi [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 7U-ALYeHwihW; Thu, 3 Jul 2014 11:55:19 +0000 (UTC) Received: from [IPv6:2001:67c:164:42:edca:7cd0:d3ac:8d5a] (unknown [IPv6:2001:67c:164:42:edca:7cd0:d3ac:8d5a]) (Authenticated sender: pete) by mail.helenius.fi (Postfix) with ESMTPA id C51D68F95; Thu, 3 Jul 2014 11:55:17 +0000 (UTC) Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Subject: Re: l2arc compression leak From: Petri Helenius In-Reply-To: Date: Thu, 3 Jul 2014 14:55:16 +0300 Message-Id: <9E818A41-AEF6-4600-B12B-0539EE521B60@helenius.fi> References: <5AD0B5C0-7C72-46FA-86D3-7AFA8FA1E84E@helenius.fi> To: krad X-Mailer: Apple Mail (2.1878.6) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: developer@open-zfs.org, "" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 11:55:39 -0000

Anyone know if this is fixed in the recent ZFS commits?

Pete

On 17 Jun 2014, at 11:24, Petri Helenius wrote:
>
> I wonder when this makes it to HEAD?
>
> Pete
>
> On 17 Jun 2014, at 10:04, krad wrote:
>
>> that's really a decision for you, as your situation is specific to you, and you get hit with the penalties if anything goes wrong. If it's causing you a major problem in production and the risk/benefit ratio is worth it, you could use it, but I would make sure you do rigorous testing first. However, if you don't have a specific issue, I would hold off until it's in stable at least.
>>
>> On 16 June 2014 07:40, Petri Helenius wrote:
>>
>> Hi,
>>
>> Recent FreeBSD 10-STABLE seems to be suffering from the L2ARC memory leak, eventually hanging on pfault.
>>
>> Should I apply this patch
>> http://lists.open-zfs.org/pipermail/developer/2014-March/000535.html
>>
>> or wait for integration to SVN?
>>=20 >> Pete >>=20 >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>=20 >=20 From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 13:28:08 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5498D7B9; Thu, 3 Jul 2014 13:28:08 +0000 (UTC) Received: from archeo.suszko.eu (archeo.unixguru.pl [IPv6:2001:41d0:1:f47a::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 126BA279D; Thu, 3 Jul 2014 13:28:07 +0000 (UTC) Received: from archeo (localhost [127.0.0.1]) by archeo.suszko.eu (Postfix) with ESMTP id 418942063809; Thu, 3 Jul 2014 15:28:04 +0200 (CEST) X-Virus-Scanned: amavisd-new at archeo.local Received: from archeo.suszko.eu ([127.0.0.1]) by archeo (archeo.local [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id BR52lJxT_I3g; Thu, 3 Jul 2014 15:28:04 +0200 (CEST) Received: from helium (gate.grtech.pl [195.8.99.234]) by archeo.suszko.eu (Postfix) with ESMTPSA id 9E7F62063802; Thu, 3 Jul 2014 15:28:03 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=suszko.eu; s=dkim; t=1404394083; bh=xgYqQpAPYKSi8YDgY9nUzOD+K93OVM3/t6YQskXNYd4=; h=Date:From:To:Cc:Subject:In-Reply-To:References; b=sY4bx57A0eblnC/FPY2Y20na2ULHe5ctokywe7ak66QRNzjNSzCbd7VMCmEKSEoQ+ 8Q4kiCLa1G0ZIh18QWLqIpSrZLMwzl30aomQNF9xrDrhuCf/G6evE6QJof0R30ld6F 7kNHnmBvhN0eSUaNsz0Q4LzMeocg958zbFgZOvPc= Date: Thu, 3 Jul 2014 15:28:01 +0200 From: Maciej Suszko To: Stefan Esser Subject: Re: ccdconfig and Linux mdadm Message-ID: <20140703152801.695a39e6@helium> In-Reply-To: <53B5395B.6040301@freebsd.org> References: <20140703114254.6472055a@helium> <53B5395B.6040301@freebsd.org> X-Mailer: Claws Mail 3.10.1 (GTK+ 2.24.22; amd64-portbld-freebsd10.0) MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/lqX7EpxMeMLt.RhDezI2/jC"; protocol="application/pgp-signature" Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 13:28:08 -0000 --Sig_/lqX7EpxMeMLt.RhDezI2/jC Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable On Thu, 03 Jul 2014 13:07:07 +0200 Stefan Esser wrote: > Well, you should first check whether the Ext3 file system has been > created with parameters that are compatible with the ext2 driver in > FreeBSD (which has grown some Ext3 extensions, but many are missing). >=20 > But AFAIK, the ext2/3 tools are direct and full ports of the Linux > versions, and they should just work on file systems that the FreeBSD > kernel does not support. ext3 partition created directly on disk is recognized under FreeBSD - simple checking `file -s /dev/ada1s1` and `file -s /dev/sdb1` give similar results. When using ccd - device is seen as data, as there was no filesystem. > OTOH, I'm not sure, whether the parameters to ccdconfig are correct. > A chunk size of 512KB does not match the parameter 512 in ccdconfig, > AFAICT (512 sectors =3D=3D 256KB). 
> But this should not affect reading the super-block, which ought to be
> in the first chunk, anyway.

I tried setting ileave to 1024 and using CCDF_NO_OFFSET - different combinations, but with no luck.

> You may want to dump the raw data from the md/ccd device and compare
> results under Linux and under FreeBSD. This may give some insight,
> why fsck.ext3 does not find a valid super-block ...

I dd'ed the first 1MB directly from md0 and ccd0 - the MD5 checksums are different.

> Just my 2ct - I'm not using ccdconfig myself ...

Thanks for the suggestions.
--
regards, Maciej Suszko.

--Sig_/lqX7EpxMeMLt.RhDezI2/jC Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlO1WmEACgkQCikUk0l7iGosbACgirl0J9WdzZf4dZNlhq+QatNJ VNEAn2byQBjx0ZA5MLsLRcqkMnI6YDE3 =Pc5v -----END PGP SIGNATURE----- --Sig_/lqX7EpxMeMLt.RhDezI2/jC--

From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 20:35:59 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 601BBD14; Thu, 3 Jul 2014 20:35:59 +0000 (UTC) Received: from archeo.suszko.eu (archeo.unixguru.pl [IPv6:2001:41d0:1:f47a::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EA778220E; Thu, 3 Jul 2014 20:35:58 +0000 (UTC) Received: from archeo (localhost [127.0.0.1]) by archeo.suszko.eu (Postfix) with ESMTP id 696E22063809; Thu, 3 Jul 2014 22:35:54 +0200 (CEST) X-Virus-Scanned: amavisd-new at archeo.local Received: from archeo.suszko.eu ([127.0.0.1]) by archeo (archeo.local [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 1e8sb1piOiTP; Thu, 3 Jul 2014 22:35:54 +0200 (CEST) Received: from leo.lan (89-66-16-9.dynamic.chello.pl [89.66.16.9]) by archeo.suszko.eu (Postfix) with ESMTPSA id ABF0D2063807; Thu, 3 Jul 2014 22:35:53 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=suszko.eu; s=dkim; t=1404419754; bh=UL4f+KdyTz8yquSSrkQPCbbPqcqsF7HGcSrOWydRd2g=; h=Date:From:To:Cc:Subject:In-Reply-To:References; b=GqNgd7bYxkYQdIPsVL3PxLx/CL7a/J6NXomqUtbReVpZzy0GbcA3HkWPZibvdQZgN fmoaiTJC2jFF0z8afbBPbLAMtHRcVhvyvY6yB5C9hC48Ud+cKlEp9Em0C+Au9n3BK5 arosGOkpiKvkNmqpD2KstMwD/baqGY/bP1HmbimM= Date: Thu, 3 Jul 2014 22:35:48 +0200 From: Maciej Suszko To: Stefan Esser Subject: Re: ccdconfig and Linux mdadm Message-ID: <20140703223548.49b5c907@leo.lan> In-Reply-To: <53B57935.3090209@freebsd.org> References: <20140703114254.6472055a@helium> <53B5395B.6040301@freebsd.org> <20140703152801.695a39e6@helium> <53B57935.3090209@freebsd.org> X-Mailer: Claws Mail 3.10.1 (GTK+ 2.24.22; amd64-portbld-freebsd10.0) MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/aLwII0lQ=jMv_xdrIabXVFE"; protocol="application/pgp-signature" Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 20:35:59 -0000 --Sig_/aLwII0lQ=jMv_xdrIabXVFE Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable

Stefan Esser wrote:
> In this situation, I'd do the following:
>
> 1) dd 1MB from the start of the partition from each underlying device
> (sdb1, sdc1) and
from the md device into named files under Linux
>
> 2) Same as 1) under FreeBSD ...
>
> 3) Dump the first 4KB of each file into a text file, e.g. with
> "dd if=$FILE bs=4k count=1 | hd > $FILE.txt" and look for
> signatures that are similar (e.g. for the "magic number" of
> the ext3fs).

Thanks for the pointers. Studying the mdraid superblock formats [1] I found that in my case the superblock starts 4K from the beginning of each device (version 1.2). Checking byte by byte against the specification I hit the data_offset field - it was 0x800 (2048 decimal), so with 512-byte sectors that means the data starts at 1MB... Here's what I just did:

1) create gnop devices with a 1MB offset:

root@fbsd:~ # gnop create -o 1M ada1s1
root@fbsd:~ # gnop create -o 1M ada2s1

2) create the ccd device (this time md0 was created with chunksize 32):

root@fbsd:~ # ccdconfig ccd0 32 linux /dev/ada1s1.nop /dev/ada2s1.nop
root@fbsd:~ # ccdconfig -g
ccd0 64 0 /dev/ada1s1.nop /dev/ada2s1.nop

And finally, here are the results:

root@fbsd:~ # file -s /dev/ccd0
/dev/ccd0: Linux rev 1.0 ext3 filesystem data, UUID=c442e028-bfa8-4841-8bb2-7d21a9835c00
root@fbsd:~ # df -ht ext2fs
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/ccd0     190M    185M    5.4M    97%    /root/nobackup
root@fbsd:~ # ls -la
total 185062
drwxr-xr-x   3 root  wheel       1024 Jul  3 22:04 .
drwxr-xr-x  10 root  wheel        512 Jul  3 20:34 ..
-rw-r--r--   1 root  wheel         45 Jul  3 22:05 180mb.MD5
-rw-r--r--   1 root  wheel  188743680 Jul  3 22:03 180mb.file
drwx------   2 root  wheel      12288 Jul  3 22:19 lost+found
root@fbsd:~ # gmd5sum -c 180mb.MD5
180mb.file: OK

The 180mb.* files were created under Linux. Again I can say FreeBSD rocks! ... as usual :D

[1] https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
--
regards, Maciej Suszko.

--Sig_/aLwII0lQ=jMv_xdrIabXVFE Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlO1vqgACgkQCikUk0l7iGqMIwCfdrhS2MwmAYnwiQL5JYsQSZV0 FtMAnA7sbUBKem5Mg6pHJ4K7VDwydNPU =nCa/ -----END PGP SIGNATURE----- --Sig_/aLwII0lQ=jMv_xdrIabXVFE--

From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 20:37:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E4169DDC for ; Thu, 3 Jul 2014 20:37:56 +0000 (UTC) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id BA6F4223B for ; Thu, 3 Jul 2014 20:37:56 +0000 (UTC) Received: from spa-sysadm-01.spa.umn.edu ([134.84.199.8]) by mail.physics.umn.edu with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1X2nEA-000DSu-F3 for freebsd-fs@freebsd.org; Thu, 03 Jul 2014 15:03:31 -0500 Message-ID: <53B5B712.5050404@physics.umn.edu> Date: Thu, 03 Jul 2014 15:03:30 -0500 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: FreeBSD Filesystems Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=ALL_TRUSTED,RP_MATCHES_RCVD autolearn=unavailable version=3.3.2 Subject:
replaced da devices not being detected X-SA-Exim-Version: 4.2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 20:37:57 -0000 Running a FreeBSD 9.1 server with four attached Supermicro SAS JBODs, 45 drives in each. Sometimes when we replace a drive due to failure, the new drive isn't detected. At that point we can swap the drive in and out or between vacant slots with nothing appearing in dmesg or other logs. But we know the drive is working because it is detected by the chassis if queried with sg3utils. No effect from "camcontrol rescan" etc, either. Rebooting the server does result in the drive being detected but obviously this isn't a desirable way to fix it. It does seem to me like we get to replace some number of drives without incident, then after some point no new da devices are detected. Just wondering if anyone has seen this, or if there are any sysctls which might affect it or other ways to tickle the system into behaving as expected. Thanks for any advice, Graham -- ------------------------------------------------------------------------- Graham Allan School of Physics and Astronomy - University of Minnesota ------------------------------------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Thu Jul 3 23:41:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4C69EDD8 for ; Thu, 3 Jul 2014 23:41:08 +0000 (UTC) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 21367218F for ; Thu, 3 Jul 2014 23:41:07 +0000 (UTC) Received: from spa-sysadm-01.spa.umn.edu ([134.84.199.8]) by mail.physics.umn.edu with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1X2qcj-000KHj-AS for freebsd-fs@freebsd.org; Thu, 03 Jul 2014 18:41:06 -0500 Message-ID: <53B5EA11.4060509@physics.umn.edu> Date: Thu, 03 Jul 2014 18:41:05 -0500 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <53B5B712.5050404@physics.umn.edu> In-Reply-To: <53B5B712.5050404@physics.umn.edu> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-1.7 required=5.0 tests=ALL_TRUSTED,RP_MATCHES_RCVD autolearn=unavailable version=3.3.2 Subject: Re: replaced da devices not being detected X-SA-Exim-Version: 4.2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 03 Jul 2014 23:41:08 -0000 On 7/3/2014 3:03 PM, Graham Allan wrote: > > It does seem to me like we get to replace some number of drives without > incident, then after some point no new da devices are detected. I should have given some more info about the HBA etc in use - it's an LSI 9205-8e (SAS2308, using mps driver), and dmesg is telling me the HBA has (IT) firmware 14.00.00.00. 
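For anyone comparing notes on this kind of stuck-rescan situation, the usual commands for seeing what CAM and the HBA each believe are sketched here; the sas2ircu controller index is illustrative:

  # camcontrol devlist -v       (what CAM currently knows about)
  # camcontrol rescan all       (the rescan already tried above)
  # sas2ircu LIST
  # sas2ircu 0 DISPLAY          (what the controller/enclosure reports)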
I don't know if that firmware level is good or bad, but it appears to match the mps driver version, if that means anything. I can see LSI is up to firmware 19.00.00.00 for the card, and I know I've seen discussion here of the favored version, but I can't find it now. However, SAS2IRCU can see the added drive even when camcontrol fails to, so I'm not sure that it's related to the HBA as such - unless SAS2IRCU gets that information by a different path, such as querying the enclosure controller. -- ------------------------------------------------------------------------- Graham Allan School of Physics and Astronomy - University of Minnesota ------------------------------------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Jul 4 02:41:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 29352538; Fri, 4 Jul 2014 02:41:38 +0000 (UTC) Received: from mail-wi0-x22c.google.com (mail-wi0-x22c.google.com [IPv6:2a00:1450:400c:c05::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 91F2C220B; Fri, 4 Jul 2014 02:41:37 +0000 (UTC) Received: by mail-wi0-f172.google.com with SMTP id hi2so12300134wib.5 for ; Thu, 03 Jul 2014 19:41:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=O3U0lY1sS/JXZ9ZwhFuagjCTQtWQ1fDjz/2+PLAvpk8=; b=cd0N93eMGkpYcJRpKCSlW9/Z0oQ+wb65DDHDWiXnukZnYZlaRHxEUolK8lpA651/35 se4TXNw+tsDFbvK7NAyM2vh4VuJzHpFLV4kss7Q3uh/x5HPiVRdz4sl7DoRbXvMv1dMY PH4mS05RLAzXfXj8C1BCEIOHijsjahXhoV8291NkdPFbZNSp0G7eR5G9/KU7r9PR52Ri Rz431CDy4vy+6LrSDN5yEBkM/Y/fx2nPYfXZluQUHJBBMkLUiKbgWcVluPHjF/WyErln BghdnDurq96WhOOONn4x/5W385pJBjvDaGAKXMU5NA2aordsEgE5PH0fqhnuN1bUfEYI urKg== X-Received: by 10.180.74.9 with SMTP id p9mr14723281wiv.39.1404441695713; Thu, 03 Jul 2014 19:41:35 -0700 (PDT) Received: from mavbook.mavhome.dp.ua ([134.249.139.101]) by mx.google.com with ESMTPSA id nc19sm19330251wic.4.2014.07.03.19.41.34 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 03 Jul 2014 19:41:34 -0700 (PDT) Sender: Alexander Motin Message-ID: <53B6145D.1090405@FreeBSD.org> Date: Fri, 04 Jul 2014 05:41:33 +0300 From: Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: Pawel Jakub Dawidek Subject: Re: zfs_setextattr() synchronicity References: <53B5FD02.5030700@FreeBSD.org> In-Reply-To: <53B5FD02.5030700@FreeBSD.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Jul 2014 02:41:38 -0000 On 04.07.2014 04:01, Alexander Motin wrote: > Doing some Samba benchmarks, creating many small files on ZFS, I've > noticed that its performance depends heavily on the presence of a dedicated ZIL. > Looking deeper I've noticed that most of the time is spent in the > extattr_set_file() syscall.
A deeper look brought me to zfs_vnops.c, where > in zfs_setextattr() I found: > > VOP_WRITE(vp, ap->a_uio, IO_UNIT | IO_SYNC, ap->a_cred); > > I guess that IO_SYNC here is what causes a ZIL commit for every created > file when Samba sets its DOSATTRIB attribute. I've tried to find that > code in Solaris, but failed. Is it FreeBSD-specific? At the same time, looking > at the UFS code, I see that it does not synchronize those calls by default, > and zfs_setattr() calls for ZFS are likewise not synchronized by default > (unless sync=always is set). Why is zfs_setextattr() synchronized more > heavily than zfs_setattr()? I've found that this single-line patch improves the results of a file creation benchmark on Samba by several times:

--- a/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
+++ b/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
@@ -6710,7 +6710,7 @@ vop_setextattr {
 	va.va_size = 0;
 	error = VOP_SETATTR(vp, &va, ap->a_cred);
 	if (error == 0)
-		VOP_WRITE(vp, ap->a_uio, IO_UNIT | IO_SYNC, ap->a_cred);
+		VOP_WRITE(vp, ap->a_uio, IO_UNIT, ap->a_cred);
 	VOP_UNLOCK(vp, 0);
 	vn_close(vp, flags, ap->a_cred, td);

Can anybody tell me why setting extended attributes on ZFS requires zil_commit(), while otherwise it is possible to create and write files, change their permissions, etc., without ever doing zil_commit()? It looks at least strange to me. -- Alexander Motin From owner-freebsd-fs@FreeBSD.ORG Fri Jul 4 12:22:17 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9D33CC9B; Fri, 4 Jul 2014 12:22:17 +0000 (UTC) Received: from SMTP02.CITRIX.COM (smtp02.citrix.com [66.165.176.63]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (Client CN "mail.citrix.com", Issuer "Cybertrust Public SureServer SV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B03A624BC; Fri, 4 Jul 2014 12:22:16 +0000 (UTC) X-IronPort-AV: E=Sophos;i="5.01,600,1400025600"; d="scan'208";a="149983477" Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net) ([10.9.154.239]) by FTLPIPO02.CITRIX.COM with ESMTP; 04 Jul 2014 12:22:14 +0000 Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id 14.3.181.6; Fri, 4 Jul 2014 08:22:13 -0400 Message-ID: <53B69C73.7090806@citrix.com> Date: Fri, 4 Jul 2014 14:22:11 +0200 From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: FreeBSD Hackers Subject: Re: Strange IO performance with UFS References: <53B691EA.3070108@citrix.com> In-Reply-To: <53B691EA.3070108@citrix.com> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 8bit X-DLP: MIA2 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Jul 2014 12:22:17 -0000 Adding freebsd-fs, sorry for the double posting to -hackers. On 04/07/14 13:37, Roger Pau Monné wrote: > Hello, > > I'm doing some tests on IO performance using fio, and I've found > something weird when using UFS and large files.
> I have the following very simple sequential fio workload:
>
> [global]
> rw=write
> size=10g
> bs=4k
>
> [job1]
>
> In this case the box has 6GB of RAM, and when running the fio workload
> above I also run `iostat -xz -w 1` in parallel. The result of fio is
> pretty disappointing in terms of performance:
>
> bw=33309KB/s, iops=8327
>
> The output of iostat is the following:
>
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     266.1 299.0 34000.8 38243.4    1  92.4 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     236.7 235.7 30295.1 30168.9   30  61.0 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     301.8 224.7 38272.7 28674.4   80  49.3  95
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     185.1 274.8 23687.5 35168.7   15  92.4 105
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     258.4 238.1 33077.3 30475.7   36  57.1 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     200.3 213.4 25634.5 27319.4    8  72.7 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     243.3 233.7 31053.3 29919.1   31  57.4 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     243.5 228.5 31169.7 29244.1   49  73.2  99
>
> So we are performing almost as many reads as writes to disk even
> though the workload is just a sequential write. This doesn't seem
> right to me; I was expecting the number of reads to be much lower.
>
> I've also added the following DTrace probe in order to figure out where
> those reads are coming from, and the stack trace of all these read bios
> is always the same:
>
> kernel`g_io_request+0x384
> kernel`g_part_start+0x2c3
> kernel`g_io_request+0x384
> kernel`g_part_start+0x2c3
> kernel`g_io_request+0x384
> kernel`ufs_strategy+0x8a
> kernel`VOP_STRATEGY_APV+0xf5
> kernel`bufstrategy+0x46
> kernel`cluster_read+0x5e6
> kernel`ffs_balloc_ufs2+0x1be2
> kernel`ffs_write+0x310
> kernel`VOP_WRITE_APV+0x166
> kernel`vn_write+0x2eb
> kernel`vn_io_fault_doio+0x22
> kernel`vn_io_fault1+0x78
> kernel`vn_io_fault+0x173
> kernel`dofilewrite+0x85
> kernel`kern_writev+0x65
> kernel`sys_write+0x63
>
> The probe used is the following:
>
> io:::start
> /args[0] && (args[0]->bio_cmd == BIO_READ)/
> {
>         @traces[stack()] = count();
> }
>
> If I lower the file size of the fio workload to 4GB, for example,
> everything seems fine and I see almost no reads in iostat:
>
> bw=84953KB/s, iops=21238
>
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       0.0 694.6     0.0 88912.2   82 111.4 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       4.0 559.4   159.3 71014.2   67  99.6  99
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       1.9 630.8   124.8 80617.0   63  90.6 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       0.0 673.3     0.0 86177.9   80 107.2  99
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       7.0 564.5   381.7 72260.6    4  94.1 101
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       2.9 641.8    92.2 82113.9   55 101.3 100
>                 extended device statistics
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0       1.9 638.9   151.2 81773.4   54  90.4 100
>
> Is this something expected/known? Am I doing something wrong in the tests?
>
> Thanks, Roger.
> _______________________________________________ > freebsd-hackers@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-hackers > To unsubscribe, send any mail to "freebsd-hackers-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Jul 4 15:42:36 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 925FD609; Fri, 4 Jul 2014 15:42:36 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mail.feld.me", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 30BC3265E; Fri, 4 Jul 2014 15:42:36 +0000 (UTC) Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id 78f86f01; Fri, 4 Jul 2014 10:42:26 -0500 (CDT) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=feld.me; h=content-type :mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:sender; s= blargle2; bh=IsrAyr61lxug/Wx9IQrwECXZknQ=; b=bDN6mkkfoctD8+buIpZ o34f+9cQMK+7veFbQde4Euz2vHRtNTgMXp4q6GVSZDxip+qkvR4Nhb+qEIsngWBU j9Vfx3n5G+jHs56gQ90sv2iRUO8ahAkNnzSdUHIo/L6bYU9ak0EDRegx5Qr58kjh tVjQF0AxDoIwV3fofuRHEqp0EbtG8HYJ38pUEhIPuzwmERolqNjSpKXXGACd3uqd YkWMFjkN7OkZoA66ujgoKjjn3pbmUqr/8J5WUq/BHH10WwsRbKRrhqgMBbIs1ZZQ uEc7I4fHY9rskSB+xNVxW4t+An34tGJ22jNMWKpB0D1Y2+QdqlI/hgcHqpl+fEJ1 zcw== DomainKey-Signature: a=rsa-sha1; c=nofws; d=feld.me; h=content-type :mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:sender; q= dns; s=blargle2; b=BduycPLsi0iJyc/dX9YHi6WgN6mT1fLP706kzNi78Wu8e AlVsWRG/wgCWw+6VS1zkwUlK6GnRt0zNlNoaqZ9t++giw79sA8q/hIIk9n/sbAh4 eJ8scNIjsQvWnly/1kfToMvK9HEQ2dhXkLlzslBKfyemwI7TBSoiTv/fW4NEW9Yi r6rwu8rU9s2a7GTKXQs5ioqwrerHw+MW2e0Zb63jlgvCcasqFdtKkl0fCAXJyhlr X7mp3w7fCeodKP7Pb8SLs6E1ur7p5n1M6+qMcAI/kweWuupFfK8/Em52JEBHEVLk x2avCYRqJyPma+zCOHT8ULf60bU/SMfrqHE38ff/Q== Received: from mail.feld.me (mail.feld.me [66.170.3.6]); by mail.feld.me (OpenSMTPD) with ESMTP id b55a1c0d; Fri, 4 Jul 2014 10:42:26 -0500 (CDT) Received: from feld@feld.me by mail.feld.me (Archiveopteryx 3.2.0) with esmtpa id 1404488545-4188-4185/5/13; Fri, 4 Jul 2014 15:42:25 +0000 Content-Type: text/plain Mime-Version: 1.0 Subject: Re: ccdconfig and Linux mdadm From: Mark Felder In-Reply-To: <20140703223548.49b5c907@leo.lan> Date: Fri, 4 Jul 2014 10:42:28 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: <04F83DC7-CB5F-4308-A9E5-99F6EE35C7B7@FreeBSD.org> References: <20140703114254.6472055a@helium> <53B5395B.6040301@freebsd.org> <20140703152801.695a39e6@helium> <53B57935.3090209@freebsd.org> <20140703223548.49b5c907@leo.lan> To: Maciej Suszko X-Mailer: Apple Mail (2.1878.6) Sender: feld@feld.me Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Jul 2014 15:42:36 -0000 On Jul 3, 2014, at 15:35, Maciej Suszko wrote: >=20 > 180mb.* files were created under Linux. Again I can say FreeBSD > rocks! ... as usual :D >=20 >=20 Wow, this is excellent. I hope this doesn't get lost to the mailing list = archives... 
From owner-freebsd-fs@FreeBSD.ORG Fri Jul 4 16:29:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E06DCE16; Fri, 4 Jul 2014 16:29:00 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "wonkity.com", Issuer "wonkity.com" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 89AF22A7D; Fri, 4 Jul 2014 16:29:00 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.9/8.14.9) with ESMTP id s64GSMbN027827 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Fri, 4 Jul 2014 10:28:23 -0600 (MDT) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.9/8.14.9/Submit) with ESMTP id s64GSLcU027824; Fri, 4 Jul 2014 10:28:21 -0600 (MDT) (envelope-from wblock@wonkity.com) Date: Fri, 4 Jul 2014 10:28:21 -0600 (MDT) From: Warren Block To: Mark Felder Subject: Re: ccdconfig and Linux mdadm In-Reply-To: <04F83DC7-CB5F-4308-A9E5-99F6EE35C7B7@FreeBSD.org> Message-ID: References: <20140703114254.6472055a@helium> <53B5395B.6040301@freebsd.org> <20140703152801.695a39e6@helium> <53B57935.3090209@freebsd.org> <20140703223548.49b5c907@leo.lan> <04F83DC7-CB5F-4308-A9E5-99F6EE35C7B7@FreeBSD.org> User-Agent: Alpine 2.11 (BSF 23 2013-08-11) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Fri, 04 Jul 2014 10:28:23 -0600 (MDT) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Jul 2014 16:29:01 -0000 On Fri, 4 Jul 2014, Mark Felder wrote: > > On Jul 3, 2014, at 15:35, Maciej Suszko wrote: >> >> 180mb.* files were created under Linux. Again I can say FreeBSD >> rocks! ... as usual :D >> >> > > Wow, this is excellent. I hope this doesn't get lost to the mailing list archives... Agreed! I suspect this can also be done with gconcant and gstripe. If so, I'm willing to add it as an example to the gconcat man page. ccd and ccdconfig are really old, although the ccdconfig man page was updated in October 2013. The example could go there, but I'd really rather update the more modern tools. 
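Untested, but translating Maciej's recipe to the newer tools would presumably look something like the following. The 1M gnop offset comes straight from his data_offset finding; the gstripe -s stripe size has to match the mdraid chunk size, and I haven't verified the unit conversion (32768 bytes assumes a 32KiB chunk), so treat this as a sketch only:

# gnop create -o 1M ada1s1
# gnop create -o 1M ada2s1
# gstripe create -s 32768 linux0 /dev/ada1s1.nop /dev/ada2s1.nop
# mount -t ext2fs /dev/stripe/linux0 /mnt

If someone can confirm that against a real mdadm RAID0, it would make a nice example for the gstripe man page as well.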
From owner-freebsd-fs@FreeBSD.ORG Fri Jul 4 19:47:53 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D6EC34E1 for ; Fri, 4 Jul 2014 19:47:53 +0000 (UTC) Received: from venus.codepro.be (venus.codepro.be [IPv6:2a01:4f8:162:1127::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.codepro.be", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9C9F82B70 for ; Fri, 4 Jul 2014 19:47:52 +0000 (UTC) Received: from vega.codepro.be (unknown [172.16.1.3]) by venus.codepro.be (Postfix) with ESMTP id 7CFE91A421 for ; Fri, 4 Jul 2014 21:47:50 +0200 (CEST) Received: by vega.codepro.be (Postfix, from userid 1001) id 5BC402DAC; Fri, 4 Jul 2014 21:47:50 +0200 (CEST) Date: Fri, 4 Jul 2014 21:47:50 +0200 From: Kristof Provost To: freebsd-fs@freebsd.org Subject: ZFS panic on zvol resize Message-ID: <20140704194750.GU75721@vega.codepro.be> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-PGP-Fingerprint: E114 D9EA 909E D469 8F57 17A5 7D15 91C6 9EFA F286 X-Checked-By-NSA: Probably User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Jul 2014 19:47:53 -0000 Hi, On current r268263 (and also on stable-10 r268263) I can reliably panic the machine by simply attempting to resize a zvol: # zfs create tank/zvol # zfs set mountpoint=none tank/zvol # zfs create -V100G tank/zvol/disk0 # zfs set volsize=200G tank/zvol/disk0 It produces the following panic: panic: solaris assert: !rrw_held(&dp->dp_config_rwlock, RW_READER), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c, line: 1120 cpuid = 1 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe01217d54b0 kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe01217d5560 vpanic() at vpanic+0x126/frame 0xfffffe01217d55a0 panic() at panic+0x43/frame 0xfffffe01217d5600 assfail() at assfail+0x1d/frame 0xfffffe01217d5610 dsl_pool_hold() at dsl_pool_hold+0x67/frame 0xfffffe01217d5650 dmu_objset_hold() at dmu_objset_hold+0x21/frame 0xfffffe01217d5690 dsl_prop_get_integer() at dsl_prop_get_integer+0x28/frame 0xfffffe01217d56d0 zvol_set_volsize() at zvol_set_volsize+0x126/frame 0xfffffe01217d5760 zfs_prop_set_special() at zfs_prop_set_special+0x2e2/frame 0xfffffe01217d57f0 zfs_set_prop_nvlist() at zfs_set_prop_nvlist+0x23f/frame 0xfffffe01217d5880 zfs_ioc_set_prop() at zfs_ioc_set_prop+0x106/frame 0xfffffe01217d58e0 zfsdev_ioctl() at zfsdev_ioctl+0x6ee/frame 0xfffffe01217d5990 devfs_ioctl_f() at devfs_ioctl_f+0xfb/frame 0xfffffe01217d59f0 kern_ioctl() at kern_ioctl+0x22b/frame 0xfffffe01217d5a50 sys_ioctl() at sys_ioctl+0x13c/frame 0xfffffe01217d5aa0 amd64_syscall() at amd64_syscall+0x25a/frame 0xfffffe01217d5bb0 Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe01217d5bb0 --- syscall (54, FreeBSD ELF64, sys_ioctl), rip = 0x8019e89ba, rsp = 0x7fffffffb8c8, rbp = 0x7fffffffb940 --- Uptime: 2m18s Automatic reboot in 15 seconds - press a key on the console to abort Please let me know if there's any other information which could be helpful, or any patch I could test. 
Regards, Kristof From owner-freebsd-fs@FreeBSD.ORG Fri Jul 4 21:19:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 480E0673; Fri, 4 Jul 2014 21:19:57 +0000 (UTC) Received: from systemdatarecorder.org (ec2-54-246-96-61.eu-west-1.compute.amazonaws.com [54.246.96.61]) (using TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "localhost", Issuer "localhost" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id CCB43237B; Fri, 4 Jul 2014 21:19:55 +0000 (UTC) Received: from nereid (84-253-211-213.bb.dnainternet.fi [84.253.211.213]) (authenticated bits=0) by systemdatarecorder.org (8.14.4/8.14.4/Debian-2ubuntu2.1) with ESMTP id s64LIB7p026639 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 4 Jul 2014 21:18:11 GMT Date: Sat, 5 Jul 2014 00:19:38 +0300 From: Stefan Parvu To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= Subject: Re: Strange IO performance with UFS Message-Id: <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> In-Reply-To: <53B69C73.7090806@citrix.com> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> Organization: systemdatarecorder.org X-Mailer: Sylpheed 3.4.1 (GTK+ 2.24.22; amd64-portbld-freebsd11.0) Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org, FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Jul 2014 21:19:57 -0000 Hi,

> > I'm doing some tests on IO performance using fio, and I've found
> > something weird when using UFS and large files. I have the following
> > very simple sequential fio workload:

System:
FreeBSD ox 10.0-RELEASE-p6 FreeBSD 10.0-RELEASE-p6 #0: Tue Jun 24 07:47:37 UTC 2014 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

1. Seq Write to 1 file, 10GB size, single writer, block 4k, UFS2:

I tried a sequential write with a single writer, using an IOSIZE similar to your example: 10 GB to a 14TB hardware RAID 10 LSI device, using fio 2.1.9 under FreeBSD 10.0.

Result:
Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=460993KB/s, minb=460993KB/s, maxb=460993KB/s, mint=22746msec, maxt=22746msec

2. Seq Write to 2500 files, each file 5MB size, multiple writers, UFS2:

Result:
Run status group 0 (all jobs):
  WRITE: io=12500MB, aggrb=167429KB/s, minb=334KB/s, maxb=9968KB/s, mint=2568msec, maxt=76450msec

Questions:
- where are you writing, what storage: hdw / sfw RAID ?
- are you using time-based fio tests ?

For fun I can share some results we have been collecting comparing FreeBSD 10 amd64 (f10) and Debian 7 amd64 (d7) using LSI hardware RAID 10. We don't use time-based fio tests; rather, we measure how fast we can send the full IOSIZE once and record the elapsed time. This proved to be more accurate and returned saner results than keeping fio running for 15 or 30 minutes.
Id    Test_Name              Throughput    Utilization    Idle
1     f10.raid10.4k.2500       23 MB/s          8%         92%
2     f10.raid10.4k.5000       18 MB/s          9%         91%
3     f10.raid10.64k.2500     215 MB/s         22%         78%
4     f10.raid10.64k.5000     162 MB/s         18%         82%

                                                          idle + iowait
5     d7.raid10.4k.2500        29 MB/s          2%        65.08 + 32.93
6     d7.raid10.4k.5000        29 MB/s          3%        53.68 + 43.79
7     d7.raid10.64k.2500      297 MB/s          3%        56.44 + 41.11
8     d7.raid10.64k.5000      182 MB/s          4%        12.85 + 83.85

-- Stefan Parvu From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 02:07:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6AF19DAE for ; Sat, 5 Jul 2014 02:07:44 +0000 (UTC) Received: from frv191.fwdcdn.com (frv191.fwdcdn.com [212.42.77.191]) (using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 25FA028E3 for ; Sat, 5 Jul 2014 02:07:43 +0000 (UTC) Received: from [10.10.1.29] (helo=frv197.fwdcdn.com) by frv191.fwdcdn.com with esmtp ID 1X3F9Q-0008BB-UV for freebsd-fs@freebsd.org; Sat, 05 Jul 2014 04:52:28 +0300 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References:In-Reply-To:Message-Id:Cc:To:Subject:From:Date; bh=m+Y1jLL9rN2rsiY3Qyc8BrQqd9GARrDDFjN4xDF0tcI=; b=bCwSXsCHNL6KHdb9lbC+ZXyG4BrPVyMfcjAAZnqqjHVbklx/nf9jgsG+Q4rCOxdrCVqF3MAE4bx//EkaSX+B+pNlQ8ZgEh8oMQF5TRODJTzTiILQfaAUUlJysodGtJuYnx+yyKGK9E0fQfTJUf3d0/7E2Q7c9wGmltm4A1ns4hM=; Received: from [10.10.10.35] (helo=frv35.fwdcdn.com) by frv197.fwdcdn.com with smtp ID 1X3F9H-0001Mz-4b for freebsd-fs@freebsd.org; Sat, 05 Jul 2014 04:52:19 +0300 Date: Sat, 05 Jul 2014 04:52:18 +0300 From: Vladislav Prodan Subject: Re: ZFS panic on zvol resize To: Kristof Provost X-Mailer: mail.ukr.net 5.0 Message-Id: <1404524910.210843386.2z8lay1z@frv35.fwdcdn.com> In-Reply-To: <20140704194750.GU75721@vega.codepro.be> References: <20140704194750.GU75721@vega.codepro.be> MIME-Version: 1.0 Received: from universite@ukr.net by frv35.fwdcdn.com; Sat, 05 Jul 2014 04:52:18 +0300 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: binary Content-Disposition: inline Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 02:07:44 -0000 --- Original message --- From: "Kristof Provost" Date: 4 July 2014, 22:48:00 > Hi, > > On current r268263 (and also on stable-10 r268263) I can reliably panic > the machine by simply attempting to resize a zvol:
>
> # zfs create tank/zvol
> # zfs set mountpoint=none tank/zvol
> # zfs create -V100G tank/zvol/disk0
> # zfs set volsize=200G tank/zvol/disk0
>
> It produces the following panic:

r268271
# uname -a
FreeBSD vm-10-2.domain.com 10.0-STABLE FreeBSD 10.0-STABLE #0: Sat Jul 5 04:38:25 EEST 2014 root@vm-10-2.domain.com:/usr/obj/usr/src/sys/vm-10-2.3 amd64

This sequence of commands does not cause a panic:

zfs create zroot/zvol
zfs set mountpoint=none zroot/zvol
zfs create -V1G zroot/zvol/disk0
zfs set volsize=2G zroot/zvol/disk0

Do you recommend installing revision r268263 and checking again, or using larger volumes - 200-400GB?

-- Vladislav V. Prodan System & Network Administrator support.od.ua From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 09:32:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9DA406C4; Sat, 5 Jul 2014 09:32:12 +0000 (UTC) Received: from SMTP.CITRIX.COM (smtp.citrix.com [66.165.176.89]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (Client CN "mail.citrix.com", Issuer "Cybertrust Public SureServer SV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B814F277A; Sat, 5 Jul 2014 09:32:10 +0000 (UTC) X-IronPort-AV: E=Sophos;i="5.01,606,1400025600"; d="scan'208";a="149918303" Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net) ([10.9.154.239]) by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jul 2014 09:32:07 +0000 Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id 14.3.181.6; Sat, 5 Jul 2014 05:32:06 -0400 Message-ID: <53B7C616.1000702@citrix.com> Date: Sat, 5 Jul 2014 11:32:06 +0200 From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: Stefan Parvu Subject: Re: Strange IO performance with UFS References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> In-Reply-To: <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 7bit X-DLP: MIA2 Cc: freebsd-fs@freebsd.org, FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 09:32:12 -0000 On 04/07/14 23:19, Stefan Parvu wrote: > Hi, > >>> I'm doing some tests on IO performance using fio, and I've found >>> something weird when using UFS and large files. I have the following >>> very simple sequential fio workload: > > System: > FreeBSD ox 10.0-RELEASE-p6 FreeBSD 10.0-RELEASE-p6 #0: Tue Jun 24 07:47:37 UTC 2014 > root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 > > > 1. Seq Write to 1 file, 10GB size, single writer, block 4k, UFS2: > > I tried a sequential write with a single writer, using an IOSIZE similar to your example: 10 > GB to a 14TB hardware RAID 10 LSI device, using fio 2.1.9 under FreeBSD 10.0. > > Result: > Run status group 0 (all jobs): > WRITE: io=10240MB, aggrb=460993KB/s, minb=460993KB/s, maxb=460993KB/s, > mint=22746msec, maxt=22746msec This looks much better than what I saw in my benchmarks; how much memory does the system have?
In my case I've seen the reads issue when trying to write to files that were greater than the memory the system has. My box has 6GB of RAM and I was using a 10GB file. > > > 2. Seq Write to 2500 files, each file 5MB size, multiple writers, UFS2: > > Result: > Run status group 0 (all jobs): > WRITE: io=12500MB, aggrb=167429KB/s, minb=334KB/s, maxb=9968KB/s, > mint=2568msec, maxt=76450msec > > Questions: > > - where are you writing, what storage: hdw / sfw RAID ? The storage is a simple SATA disk, no RAID:

pass0 at ahcich0 bus 0 scbus0 target 0 lun 0
pass0: ATA-8 SATA 3.x device
pass0: Serial Number Z3T3FJXL
pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
pass0: Command Queueing enabled

> - are you using time based fio tests ?

I'm using the following fio workload, as stated in the first email:

[global]
rw=write
size=4g
bs=4k

[job1]

The problem doesn't seem to be related to the hardware (I've also seen this when running inside of a VM), but to UFS itself, which at some point (or maybe under certain conditions) starts issuing a lot of reads for what is a simple write:

kernel`g_io_request+0x384
kernel`g_part_start+0x2c3
kernel`g_io_request+0x384
kernel`g_part_start+0x2c3
kernel`g_io_request+0x384
kernel`ufs_strategy+0x8a
kernel`VOP_STRATEGY_APV+0xf5
kernel`bufstrategy+0x46
kernel`cluster_read+0x5e6
kernel`ffs_balloc_ufs2+0x1be2
kernel`ffs_write+0x310
kernel`VOP_WRITE_APV+0x166
kernel`vn_write+0x2eb
kernel`vn_io_fault_doio+0x22
kernel`vn_io_fault1+0x78
kernel`vn_io_fault+0x173
kernel`dofilewrite+0x85
kernel`kern_writev+0x65
kernel`sys_write+0x63

This can also be seen by running iostat in parallel with the fio workload:

device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0     243.3 233.7 31053.3 29919.1   31  57.4 100

This clearly shows that even though I was doing a sequential write (the fio workload shown above), the disk was actually reading more data than it was writing, which makes no sense, and all the reads come from the stack trace shown above.

Roger.
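P.S. A variant of the probe that may be handy (untested as pasted here; bio_bcount should be the byte count of the request) aggregates the sizes of those reads instead, to show whether they are whole filesystem blocks or something smaller:

io:::start
/args[0] && (args[0]->bio_cmd == BIO_READ)/
{
        @sizes = quantize(args[0]->bio_bcount);
}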
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 09:34:31 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3E5AC8A6 for ; Sat, 5 Jul 2014 09:34:31 +0000 (UTC) Received: from venus.codepro.be (venus.codepro.be [IPv6:2a01:4f8:162:1127::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.codepro.be", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 01D7127B9 for ; Sat, 5 Jul 2014 09:34:30 +0000 (UTC) Received: from vega.codepro.be (unknown [172.16.1.3]) by venus.codepro.be (Postfix) with ESMTP id 64B2D1AD50; Sat, 5 Jul 2014 11:34:28 +0200 (CEST) Received: by vega.codepro.be (Postfix, from userid 1001) id 5F6742F39; Sat, 5 Jul 2014 11:34:28 +0200 (CEST) Date: Sat, 5 Jul 2014 11:34:28 +0200 From: Kristof Provost To: Vladislav Prodan Subject: Re: ZFS panic on zvol resize Message-ID: <20140705093428.GV75721@vega.codepro.be> References: <20140704194750.GU75721@vega.codepro.be> <1404524910.210843386.2z8lay1z@frv35.fwdcdn.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <1404524910.210843386.2z8lay1z@frv35.fwdcdn.com> X-PGP-Fingerprint: E114 D9EA 909E D469 8F57 17A5 7D15 91C6 9EFA F286 X-Checked-By-NSA: Probably User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 09:34:31 -0000 On 2014-07-05 04:52:18 (+0300), Vladislav Prodan wrote: > r268271 > # uname -a > FreeBSD vm-10-2.domain.com 10.0-STABLE FreeBSD 10.0-STABLE #0: Sat Jul 5 04:38:25 EEST 2014 root@vm-10-2.domain.com:/usr/obj/usr/src/sys/vm-10-2.3 amd64 > > This sequence of commands does not cause panic: > zfs create zroot/zvol > zfs set mountpoint=none zroot/zvol > zfs create -V1G zroot/zvol/disk0 > zfs set volsize=2G zroot/zvol/disk0 > > > Do you recommend to install revision r268263 and check again? or use larger disks - 200-400GB? > No, the zvol size doesn't matter for me. I can reproduce it with 1G/2G as well. I'm also still seeing the panic on r268286 (11-current), so the version you tested should have failed too. Both of the affected systems are amd64, and both are running on raidz (or raidz-2). I've also been able to reproduce it on a zpool on a single disk. 
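The single-disk case shouldn't even need real hardware; a throwaway file-backed pool ought to show it too. Something like this (untested in exactly this form, and the paths are just examples):

# truncate -s 4g /tmp/tank.img
# zpool create testpool /tmp/tank.img
# zfs create -V1G testpool/disk0
# zfs set volsize=2G testpool/disk0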
Regards, Kristof From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 09:36:32 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BC08B986; Sat, 5 Jul 2014 09:36:32 +0000 (UTC) Received: from SMTP.CITRIX.COM (smtp.citrix.com [66.165.176.89]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (Client CN "mail.citrix.com", Issuer "Cybertrust Public SureServer SV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A1C9827E0; Sat, 5 Jul 2014 09:36:31 +0000 (UTC) X-IronPort-AV: E=Sophos;i="5.01,606,1400025600"; d="scan'208";a="149918522" Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net) ([10.9.154.239]) by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jul 2014 09:36:29 +0000 Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id 14.3.181.6; Sat, 5 Jul 2014 05:36:28 -0400 Message-ID: <53B7C71C.40005@citrix.com> Date: Sat, 5 Jul 2014 11:36:28 +0200 From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: Stefan Parvu Subject: Re: Strange IO performance with UFS References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> In-Reply-To: <53B7C616.1000702@citrix.com> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 8bit X-DLP: MIA2 Cc: freebsd-fs@freebsd.org, FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 09:36:32 -0000 On 05/07/14 11:32, Roger Pau Monné wrote: > I'm using the following fio workload, as stated in the first email: > > [global] > rw=write > size=4g I've pasted the wrong workload, the size is 10g. > bs=4k > > [job1] From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 09:58:44 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 49DEEB86; Sat, 5 Jul 2014 09:58:44 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id DED9D291A; Sat, 5 Jul 2014 09:58:43 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s659wVh1029271 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 5 Jul 2014 12:58:31 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s659wVh1029271 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s659wVLq029270; Sat, 5 Jul 2014 12:58:31 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Sat, 5 Jul 2014 12:58:31 +0300 From: Konstantin Belousov To: Roger Pau Monn? 
Subject: Re: Strange IO performance with UFS Message-ID: <20140705095831.GO93733@kib.kiev.ua> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="YAtIlCcyqLEoH4m3" Content-Disposition: inline In-Reply-To: <53B7C616.1000702@citrix.com> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 09:58:44 -0000 --YAtIlCcyqLEoH4m3 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sat, Jul 05, 2014 at 11:32:06AM +0200, Roger Pau Monn? wrote: > On 04/07/14 23:19, Stefan Parvu wrote: > > Hi, > >=20 > >>> I'm doing some tests on IO performance using fio, and I've found > >>> something weird when using UFS and large files. I have the following > >>> very simple sequential fio workload: > >=20 > > System: > > FreeBSD ox 10.0-RELEASE-p6 FreeBSD 10.0-RELEASE-p6 #0: Tue Jun 24 07:47= :37 UTC 2014 =20 > > root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 > >=20 > >=20 > > 1. Seq Write to 1 file, 10GB size, single writer, block 4k, UFS2: > >=20 > > I tried to write seq using a single writer using an IOSIZE similar to y= our example, 10 > > GB to a 14TB Hdw RAID 10 LSI device using fio 2.1.9 under FreeBSD 10.0.= =20 > >=20 > > Result: > > Run status group 0 (all jobs): > > WRITE: io=3D10240MB, aggrb=3D460993KB/s, minb=3D460993KB/s, maxb=3D46= 0993KB/s,=20 > > mint=3D22746msec, maxt=3D22746msec >=20 > This looks much better than what I've saw in my benchmarks, how much > memory does the system have? >=20 > In my case I've seen the reads issue when trying to write to files that > where greater than the memory the system has. My box has 6GB of RAM and > I was using a 10GB file. >=20 > >=20 > >=20 > > 2. Seq Write to 2500 files, each file 5MB size, multiple writers, UFS2: > >=20 > > Result: > > Run status group 0 (all jobs): > > WRITE: io=3D12500MB, aggrb=3D167429KB/s, minb=3D334KB/s, maxb=3D9968K= B/s,=20 > > mint=3D2568msec, maxt=3D76450msec > >=20 > > Questions: > >=20 > > - where are you writing, what storage: hdw / sfw RAID ? >=20 > The storage is a simple SATA disk, no RAID: >=20 > pass0 at ahcich0 bus 0 scbus0 target 0 lun 0 > pass0: ATA-8 SATA 3.x device > pass0: Serial Number Z3T3FJXL > pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) > pass0: Command Queueing enabled >=20 > > - are you using time based fio tests ? 
>
> I'm using the following fio workload, as stated in the first email:
>
> [global]
> rw=write
> size=4g
> bs=4k
>
> [job1]
>
> The problem doesn't seem to be related to the hardware (I've also seen
> this when running inside of a VM), but to UFS itself, which at some point
> (or maybe under certain conditions) starts issuing a lot of reads for
> what is a simple write:
>
> kernel`g_io_request+0x384
> kernel`g_part_start+0x2c3
> kernel`g_io_request+0x384
> kernel`g_part_start+0x2c3
> kernel`g_io_request+0x384
> kernel`ufs_strategy+0x8a
> kernel`VOP_STRATEGY_APV+0xf5
> kernel`bufstrategy+0x46
> kernel`cluster_read+0x5e6
> kernel`ffs_balloc_ufs2+0x1be2
> kernel`ffs_write+0x310
> kernel`VOP_WRITE_APV+0x166
> kernel`vn_write+0x2eb
> kernel`vn_io_fault_doio+0x22
> kernel`vn_io_fault1+0x78
> kernel`vn_io_fault+0x173
> kernel`dofilewrite+0x85
> kernel`kern_writev+0x65
> kernel`sys_write+0x63
>
> This can also be seen by running iostat in parallel with the fio workload:
>
> device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
> ada0     243.3 233.7 31053.3 29919.1   31  57.4 100
>
> This clearly shows that even though I was doing a sequential write (the
> fio workload shown above), the disk was actually reading more data than
> it was writing, which makes no sense, and all the reads come from the
> stack trace shown above.

The backtrace above means that BA_CLRBUF was specified for UFS_BALLOC(). In turn, this occurs when the write size is less than the UFS block size. UFS has to read the block to ensure that a partial write does not corrupt the rest of the buffer.

You can get the block size for a file with stat(2) (the st_blksize field of struct stat), with statfs(2) (the f_iosize field of struct statfs), or just by looking at the dumpfs output for your filesystem (the bsize value). For modern UFS the typical value is 32KB.
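For convenience, a minimal program (a quick sketch, not compile-tested) that prints that value for a given file:

#include <sys/stat.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
	struct stat sb;

	if (argc != 2)
		errx(1, "usage: blksize file");
	if (stat(argv[1], &sb) == -1)
		err(1, "stat");
	/* Preferred I/O block size; the filesystem bsize for UFS files. */
	printf("%jd\n", (intmax_t)sb.st_blksize);
	return (0);
}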
--YAtIlCcyqLEoH4m3 Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBAgAGBQJTt8xGAAoJEJDCuSvBvK1Bj4oP/1vOuNuetXtgTXYq7OwGEjRm l45JFiwz4onMkXoqHQv7t/GEluC1SZjTXeUV8uFPcb+azSOAgRD7EpXaRzQ+2TNV lQYIrFQ1CBND6NBfabNQ1V7upnGZ5jkDx2egMnOJgLGae59308+SrUa5d1Z5D3d/ JpuaA8IcWo8UmowE+SH4pFa0gQjmY3CBbxjjTNJo3sEi5EjGerf4UKqEV5v8tBWg kL8dOYDFPidvU8pur9thjvLtOFiDTVbypaPsB6gbdixJZvPBEn3GNJPWt3eiCDTp NM0amiHk57JncSx3EJSmH5BxhHrdHrNAfW2S3LzwGN6Iul3rvoyQdQFSZX85TNRc 9YB8QvdaDx48MsEQnv1SXlJSHJQFPzpzQ7xQjjAvee+yhBX8iCAdaqAY/uBgG5iM XznhtERlaBcIeh59VZFUdH0Iwq/x6t0/di6DzP3NakB2RW9bCZp6fP0fy/eM954Y ScDm7NF6YfJ8vJhFnOK8sRPeMGn93HPMYUaAMLLJgkQoHOc0gsOW24C9y3Gepysb LS69tWq4MHa+fxkQSrwsTaSpNjWUEvE6b+YESquk8wqs/ilW+MuD8T6fSw7UrZUF rsIQ7yxdIyploduNP7YWbTnBxw3qCinVwZ4LplPjP3OeRi+HbA1GQkT+uR4Ak25h UeOraCzeLWtLU60U6AM1 =JOHj -----END PGP SIGNATURE----- --YAtIlCcyqLEoH4m3-- From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 10:34:58 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C35462EC for ; Sat, 5 Jul 2014 10:34:58 +0000 (UTC) Received: from venus.codepro.be (venus.codepro.be [IPv6:2a01:4f8:162:1127::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.codepro.be", Issuer "Gandi Standard SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 870B02BCC for ; Sat, 5 Jul 2014 10:34:58 +0000 (UTC) Received: from vega.codepro.be (unknown [172.16.1.3]) by venus.codepro.be (Postfix) with ESMTP id F15A01ADF7; Sat, 5 Jul 2014 12:34:55 +0200 (CEST) Received: by vega.codepro.be (Postfix, from userid 1001) id EDD522089; Sat, 5 Jul 2014 12:34:55 +0200 (CEST) Date: Sat, 5 Jul 2014 12:34:55 +0200 From: Kristof Provost To: Vladislav Prodan Subject: Re: ZFS panic on zvol resize Message-ID: <20140705103455.GW75721@vega.codepro.be> References: <20140704194750.GU75721@vega.codepro.be> <1404524910.210843386.2z8lay1z@frv35.fwdcdn.com> <20140705093428.GV75721@vega.codepro.be> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <20140705093428.GV75721@vega.codepro.be> X-PGP-Fingerprint: E114 D9EA 909E D469 8F57 17A5 7D15 91C6 9EFA F286 X-Checked-By-NSA: Probably User-Agent: Mutt/1.5.23 (2014-03-12) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 10:34:58 -0000 On 2014-07-05 11:34:28 (+0200), Kristof Provost wrote: > No, the zvol size doesn't matter for me. I can reproduce it with 1G/2G > as well. > > I'm also still seeing the panic on r268286 (11-current), so the version > you tested should have failed too. > > Both of the affected systems are amd64, and both are running on raidz > (or raidz-2). I've also been able to reproduce it on a zpool on a single > disk. > I should of course point out that I've got WITNESS and INVARIANTS enabled. You won't see the panic without them. 
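For reference, the kernel config options involved are below; if I remember right, GENERIC on head already ships with them enabled, while the stable branches do not:

options 	INVARIANTS
options 	INVARIANT_SUPPORT
options 	WITNESS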
Regards, Kristof From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 10:35:18 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F2644356; Sat, 5 Jul 2014 10:35:17 +0000 (UTC) Received: from SMTP02.CITRIX.COM (smtp02.citrix.com [66.165.176.63]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (Client CN "mail.citrix.com", Issuer "Cybertrust Public SureServer SV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EB4AA2BCF; Sat, 5 Jul 2014 10:35:16 +0000 (UTC) X-IronPort-AV: E=Sophos;i="5.01,607,1400025600"; d="scan'208";a="150119556" Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net) ([10.9.154.239]) by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jul 2014 10:35:12 +0000 Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id 14.3.181.6; Sat, 5 Jul 2014 06:35:12 -0400 Message-ID: <53B7D4DF.40301@citrix.com> Date: Sat, 5 Jul 2014 12:35:11 +0200 From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: Konstantin Belousov Subject: Re: Strange IO performance with UFS References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> In-Reply-To: <20140705095831.GO93733@kib.kiev.ua> X-Enigmail-Version: 1.6 Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 7bit X-DLP: MIA2 Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 10:35:18 -0000 On 05/07/14 11:58, Konstantin Belousov wrote: > On Sat, Jul 05, 2014 at 11:32:06AM +0200, Roger Pau Monn? wrote: >> On 04/07/14 23:19, Stefan Parvu wrote: >>> Hi, >>> >>>>> I'm doing some tests on IO performance using fio, and I've >>>>> found something weird when using UFS and large files. I >>>>> have the following very simple sequential fio workload: >>> >>> System: FreeBSD ox 10.0-RELEASE-p6 FreeBSD 10.0-RELEASE-p6 #0: >>> Tue Jun 24 07:47:37 UTC 2014 >>> root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC >>> amd64 >>> >>> >>> 1. Seq Write to 1 file, 10GB size, single writer, block 4k, >>> UFS2: >>> >>> I tried to write seq using a single writer using an IOSIZE >>> similar to your example, 10 GB to a 14TB Hdw RAID 10 LSI device >>> using fio 2.1.9 under FreeBSD 10.0. >>> >>> Result: Run status group 0 (all jobs): WRITE: io=10240MB, >>> aggrb=460993KB/s, minb=460993KB/s, maxb=460993KB/s, >>> mint=22746msec, maxt=22746msec >> >> This looks much better than what I've saw in my benchmarks, how >> much memory does the system have? >> >> In my case I've seen the reads issue when trying to write to >> files that where greater than the memory the system has. My box >> has 6GB of RAM and I was using a 10GB file. >> >>> >>> >>> 2. 
Seq Write to 2500 files, each file 5MB size, multiple >>> writers, UFS2: >>> >>> Result: Run status group 0 (all jobs): WRITE: io=12500MB, >>> aggrb=167429KB/s, minb=334KB/s, maxb=9968KB/s, mint=2568msec, >>> maxt=76450msec >>> >>> Questions: >>> >>> - where are you writing, what storage: hdw / sfw RAID ? >> >> The storage is a simple SATA disk, no RAID: >> >> pass0 at ahcich0 bus 0 scbus0 target 0 lun 0 pass0: >> ATA-8 SATA 3.x device pass0: Serial >> Number Z3T3FJXL pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, >> PIO 8192bytes) pass0: Command Queueing enabled >> >>> - are you using time based fio tests ? >> >> I'm using the following fio workload, as stated in the first >> email: >> >> [global] rw=write size=4g bs=4k >> >> [job1] >> >> The problem doesn't seem to be related to the hardware (I've also >> seen this when running inside of a VM), but to UFS itself that at >> some point (or maybe under certain conditions) starts making a >> lot of reads when doing a simple write: >> >> kernel`g_io_request+0x384 kernel`g_part_start+0x2c3 >> kernel`g_io_request+0x384 kernel`g_part_start+0x2c3 >> kernel`g_io_request+0x384 kernel`ufs_strategy+0x8a >> kernel`VOP_STRATEGY_APV+0xf5 kernel`bufstrategy+0x46 >> kernel`cluster_read+0x5e6 kernel`ffs_balloc_ufs2+0x1be2 >> kernel`ffs_write+0x310 kernel`VOP_WRITE_APV+0x166 >> kernel`vn_write+0x2eb kernel`vn_io_fault_doio+0x22 >> kernel`vn_io_fault1+0x78 kernel`vn_io_fault+0x173 >> kernel`dofilewrite+0x85 kernel`kern_writev+0x65 >> kernel`sys_write+0x63 >> >> This can also be seen by running iostat in parallel with the fio >> workload: >> >> device r/s w/s kr/s kw/s qlen svc_t %b ada0 >> 243.3 233.7 31053.3 29919.1 31 57.4 100 >> >> This clearly shows that even when I was doing a sequential write >> (the fio workload shown above), the disk was actually reading >> more data than writing it, which makes no sense, and all the >> reads come from the path trace shown above. > > The backtrace above means that the BA_CLRBUF was specified for > UFS_BALLOC(). In turns, this occurs when the write size is less > than the UFS block size. UFS has to read the block to ensure that > partial write does not corrupt the rest of the buffer. Thanks for the clarification, that makes sense. I'm not opening the file with O_DIRECT, so shouldn't the write be cached in memory and flushed to disk when we have the full block? It's a sequential write, so the whole block is going to be rewritten very soon. > > You can get the block size for file with stat(2), st_blksize field > of the struct stat, or using statfs(2), field f_iosize of struct > statfs, or just looking at the dumpfs output for your filesystem, > the bsize value. For modern UFS typical value is 32KB. Yes, block size is 32KB, checked with dumpfs. I've changed the block size in fio to 32k and then I get the expected results in iostat and fio: extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 1.0 658.2 31.1 84245.1 58 108.4 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 689.8 0.0 88291.4 54 112.1 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 1.0 593.3 30.6 75936.9 80 111.7 97 write: io=10240MB, bw=81704KB/s, iops=2553, runt=128339msec Roger. 
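P.S. For completeness, the job file for that last run is the same as before, with only the block size bumped to match the UFS bsize:

[global]
rw=write
size=10g
bs=32k

[job1]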
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 11:24:57 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8045CA23; Sat, 5 Jul 2014 11:24:57 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 217112F24; Sat, 5 Jul 2014 11:24:56 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s65BOmC7050072 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 5 Jul 2014 14:24:48 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s65BOmC7050072 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s65BOmDH050071; Sat, 5 Jul 2014 14:24:48 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Sat, 5 Jul 2014 14:24:48 +0300 From: Konstantin Belousov To: Roger Pau Monn? Subject: Re: Strange IO performance with UFS Message-ID: <20140705112448.GQ93733@kib.kiev.ua> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="vXXM0T2D4JjNJBTG" Content-Disposition: inline In-Reply-To: <53B7D4DF.40301@citrix.com> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Jul 2014 11:24:57 -0000 --vXXM0T2D4JjNJBTG Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sat, Jul 05, 2014 at 12:35:11PM +0200, Roger Pau Monn? wrote: > On 05/07/14 11:58, Konstantin Belousov wrote: > > On Sat, Jul 05, 2014 at 11:32:06AM +0200, Roger Pau Monn? 
wrote: > >> kernel`g_io_request+0x384 kernel`g_part_start+0x2c3=20 > >> kernel`g_io_request+0x384 kernel`g_part_start+0x2c3=20 > >> kernel`g_io_request+0x384 kernel`ufs_strategy+0x8a=20 > >> kernel`VOP_STRATEGY_APV+0xf5 kernel`bufstrategy+0x46=20 > >> kernel`cluster_read+0x5e6 kernel`ffs_balloc_ufs2+0x1be2=20 > >> kernel`ffs_write+0x310 kernel`VOP_WRITE_APV+0x166=20 > >> kernel`vn_write+0x2eb kernel`vn_io_fault_doio+0x22=20 > >> kernel`vn_io_fault1+0x78 kernel`vn_io_fault+0x173=20 > >> kernel`dofilewrite+0x85 kernel`kern_writev+0x65=20 > >> kernel`sys_write+0x63 > >>=20 > >> This can also be seen by running iostat in parallel with the fio > >> workload: > >>=20 > >> device r/s w/s kr/s kw/s qlen svc_t %b ada0 > >> 243.3 233.7 31053.3 29919.1 31 57.4 100 > >>=20 > >> This clearly shows that even when I was doing a sequential write > >> (the fio workload shown above), the disk was actually reading > >> more data than writing it, which makes no sense, and all the > >> reads come from the path trace shown above. > >=20 > > The backtrace above means that the BA_CLRBUF was specified for > > UFS_BALLOC(). In turns, this occurs when the write size is less > > than the UFS block size. UFS has to read the block to ensure that > > partial write does not corrupt the rest of the buffer. >=20 > Thanks for the clarification, that makes sense. I'm not opening the > file with O_DIRECT, so shouldn't the write be cached in memory and > flushed to disk when we have the full block? It's a sequential write, > so the whole block is going to be rewritten very soon. >=20 > >=20 > > You can get the block size for file with stat(2), st_blksize field > > of the struct stat, or using statfs(2), field f_iosize of struct > > statfs, or just looking at the dumpfs output for your filesystem, > > the bsize value. For modern UFS typical value is 32KB. >=20 > Yes, block size is 32KB, checked with dumpfs. I've changed the block > size in fio to 32k and then I get the expected results in iostat and fio: >=20 > extended device statistics > device r/s w/s kr/s kw/s qlen svc_t %b > ada0 1.0 658.2 31.1 84245.1 58 108.4 101 > extended device statistics > device r/s w/s kr/s kw/s qlen svc_t %b > ada0 0.0 689.8 0.0 88291.4 54 112.1 99 > extended device statistics > device r/s w/s kr/s kw/s qlen svc_t %b > ada0 1.0 593.3 30.6 75936.9 80 111.7 97 >=20 > write: io=3D10240MB, bw=3D81704KB/s, iops=3D2553, runt=3D128339msec The current code in ffs_write() only avoids read before write when write covers complete block. I think we can somewhat loose the test to also avoid read when we are at EOF and write covers completely the valid portion of the last block. This leaves the unwritten portion of the block with the garbage. I believe that it is not harmful, since the only way for usermode to access that garbage is through the mmap(2). The vnode_generic_getpages() zeroes out parts of the page which are after EOF. Try this, almost completely untested: commit 30375741f5b15609e51cac5b242ecfe7d614e902 Author: Konstantin Belousov Date: Sat Jul 5 14:19:39 2014 +0300 Do not do read-before-write if the written area completely covers the valid portion of the block at EOF. diff --git a/sys/ufs/ffs/ffs_vnops.c b/sys/ufs/ffs/ffs_vnops.c index 423d811..b725932 100644 --- a/sys/ufs/ffs/ffs_vnops.c +++ b/sys/ufs/ffs/ffs_vnops.c @@ -729,10 +729,12 @@ ffs_write(ap) vnode_pager_setsize(vp, uio->uio_offset + xfersize); =20 /* - * We must perform a read-before-write if the transfer size - * does not cover the entire buffer. 
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 14:05:38 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9FF1E855; Sat, 5 Jul 2014 14:05:38 +0000 (UTC)
Received: from systemdatarecorder.org (ec2-54-246-96-61.eu-west-1.compute.amazonaws.com [54.246.96.61]) (using TLSv1.1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "localhost", Issuer "localhost" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 2FEED2D1B; Sat, 5 Jul 2014 14:05:36 +0000 (UTC)
Received: from nereid (84-253-211-213.bb.dnainternet.fi [84.253.211.213]) (authenticated bits=0) by systemdatarecorder.org (8.14.4/8.14.4/Debian-2ubuntu2.1) with ESMTP id s65E3ugG030946 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Sat, 5 Jul 2014 14:03:57 GMT
Date: Sat, 5 Jul 2014 17:05:24 +0300
From: Stefan Parvu
To: Roger Pau Monné
Subject: Re: Strange IO performance with UFS
Message-Id: <20140705170524.4212b6fa0b1046a33e1fc69a@systemdatarecorder.org>
In-Reply-To: <53B7C616.1000702@citrix.com>
References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com>
Organization: systemdatarecorder.org
X-Mailer: Sylpheed 3.4.1 (GTK+ 2.24.22; amd64-portbld-freebsd11.0)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs@freebsd.org, FreeBSD Hackers
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 14:05:38 -0000

> This looks much better than what I saw in my benchmarks; how much
> memory does the system have?

We have 64GB of RAM on this system. If you increase the block size in
fio you should see better throughput, as you already found. Cool, you
sorted the thing out.

As a side note: it was interesting for us to discover that system time
between Debian 7 and FreeBSD was quite different for our test workloads.
Strangely, the Linux system stayed at around 3-4% system time no matter
what block size or number of files we pushed through the hardware RAID
10, resulting in high iowait and a long run queue (on Linux the run
queue length also counts the tasks in iowait). FreeBSD, I think, does
not add processes waiting on storage, network, etc. to the run queue
length. Is this correct?

-- 
Stefan Parvu

From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 14:24:45 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D6A3B227; Sat, 5 Jul 2014 14:24:45 +0000 (UTC)
Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "vps1.elischer.org", Issuer "CA Cert Signing Authority" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 9FCDD2EA1; Sat, 5 Jul 2014 14:24:45 +0000 (UTC)
Received: from jre-mbp.elischer.org (ppp121-45-250-191.lns20.per2.internode.on.net [121.45.250.191]) (authenticated bits=0) by vps1.elischer.org (8.14.9/8.14.9) with ESMTP id s65EOTci040172 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Sat, 5 Jul 2014 07:24:31 -0700 (PDT) (envelope-from julian@freebsd.org)
Message-ID: <53B80A97.1080803@freebsd.org>
Date: Sat, 05 Jul 2014 22:24:23 +0800
From: Julian Elischer
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.6.0
MIME-Version: 1.0
To: Konstantin Belousov, Roger Pau Monné
Subject: Re: Strange IO performance with UFS
References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua>
In-Reply-To: <20140705112448.GQ93733@kib.kiev.ua>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: freebsd-fs@freebsd.org, Stefan Parvu, FreeBSD Hackers
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 14:24:45 -0000

On 7/5/14, 7:24 PM, Konstantin Belousov wrote:
> On Sat, Jul 05, 2014 at 12:35:11PM +0200, Roger Pau Monné wrote:
>> On 05/07/14 11:58, Konstantin Belousov wrote:
>>> On Sat, Jul 05, 2014 at 11:32:06AM +0200, Roger Pau Monné
wrote:
>>>> [Roger's dtrace stack trace and iostat figures, quoted in full in
>>>> the original message, repeat the text above verbatim; trimmed.]
[...]
> The current code in ffs_write() only avoids the read-before-write
> when the write covers a complete block. I think we can somewhat
> loosen the test to also avoid the read when we are at EOF and the
> write completely covers the valid portion of the last block.
>
> This leaves the unwritten portion of the block with garbage. I
> believe that it is not harmful, since the only way for usermode to
> access that garbage is through mmap(2). The vnode_generic_getpages()
> zeroes out the parts of the page which are after EOF.

I have vague memories of this being in a security bulletin once, along
the lines of "random data disclosure" by making tons of one-frag-sized
files and then mmapping them.

> Try this, almost completely untested:
>
> [kib's ffs_vnops.c patch, quoted in full in the original, repeats the
> diff above verbatim; trimmed.]

From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 16:18:21 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 83AAFD1F; Sat, 5 Jul 2014 16:18:21 +0000 (UTC)
Received: from SMTP02.CITRIX.COM (smtp02.citrix.com [66.165.176.63]) (using TLSv1 with cipher RC4-SHA (128/128 bits)) (Client CN "mail.citrix.com", Issuer "Cybertrust Public SureServer SV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6DC402734; Sat, 5 Jul 2014 16:18:19 +0000 (UTC)
X-IronPort-AV: E=Sophos;i="5.01,608,1400025600"; d="scan'208";a="150149047"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net) ([10.9.154.239]) by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jul 2014 16:18:11 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id 14.3.181.6; Sat, 5 Jul 2014 12:18:10 -0400
Message-ID: <53B8253F.5060403@citrix.com>
Date: Sat, 5 Jul 2014 18:18:07 +0200
From: Roger Pau Monné
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:24.0) Gecko/20100101 Thunderbird/24.6.0
MIME-Version: 1.0
To: Konstantin Belousov
Subject: Re: Strange IO performance with UFS
References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua>
In-Reply-To: <20140705112448.GQ93733@kib.kiev.ua>
X-Enigmail-Version: 1.6
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit
X-DLP: MIA1
Cc: freebsd-fs@freebsd.org, Stefan Parvu, FreeBSD Hackers
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 16:18:21 -0000

On 05/07/14 13:24, Konstantin Belousov wrote:
> [The earlier exchange, including the trace, the iostat figures, and
> the block size discussion, was quoted here in full; trimmed.]
>
> Try this, almost completely untested:

Doesn't seem to help much, I'm still seeing the same issue.
I'm sampling iostat every 1s, and here's the output form the start of the 4k block fio workload: extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 349.5 0.0 44612.3 48 88.0 52 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 655.4 0.0 83773.6 76 99.8 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 699.2 0.0 89493.1 59 109.4 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 628.1 0.0 80392.6 55 114.8 98 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 655.7 0.0 83799.6 79 98.4 102 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 701.4 0.0 89782.0 80 105.5 97 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 697.9 0.0 89331.6 78 112.0 103 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 714.1 0.0 91408.7 77 110.3 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 724.0 0.0 92675.0 67 112.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 700.4 0.0 89646.6 49 102.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 686.4 0.0 87857.2 78 110.0 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 702.0 0.0 89851.6 80 112.9 97 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 736.3 0.0 94246.4 67 110.1 103 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 624.6 0.0 79950.0 48 115.7 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 704.0 0.0 90118.4 77 106.1 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 714.6 0.0 91470.0 80 103.6 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 710.4 0.0 90926.1 80 111.1 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 655.3 0.0 83882.1 70 115.8 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 539.8 0.0 69094.5 80 121.2 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 711.6 0.0 91087.6 79 107.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 705.5 0.0 90304.5 81 111.3 97 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 727.3 0.0 93092.8 81 108.9 102 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 699.5 0.0 89296.4 55 109.0 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 689.0 0.0 88066.1 78 96.6 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 738.3 0.0 94496.1 56 109.1 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 615.4 0.0 78770.0 80 112.3 98 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 707.3 0.0 90529.8 86 105.7 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 704.3 0.0 89333.9 67 98.3 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 641.3 0.0 82081.5 80 112.3 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 701.6 0.0 89747.9 51 101.1 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 693.0 0.0 88702.1 80 103.6 97 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 632.7 0.0 80991.8 80 112.0 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 709.0 
0.0 90748.2 80 107.5 102 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 715.0 0.0 91523.0 80 104.7 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 650.1 0.0 83210.5 56 110.9 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 682.2 0.0 87319.1 57 107.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 719.0 0.0 92032.6 80 103.6 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 624.3 0.0 79905.8 80 110.5 97 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 696.5 0.0 89151.7 80 109.9 103 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 664.2 0.0 85017.6 77 109.9 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 681.7 0.0 87254.0 80 107.5 98 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 668.5 0.0 85569.3 57 109.9 99 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 682.3 0.0 87329.0 53 110.8 102 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 0.0 643.9 0.0 82420.9 77 104.8 101 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 107.5 457.1 13701.7 58471.3 57 106.0 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 220.9 253.9 28281.4 32498.9 54 108.8 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 290.6 277.9 37198.8 35576.1 65 94.3 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 309.3 267.9 39590.7 34295.9 80 89.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 213.6 302.0 27212.7 38562.0 24 93.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 232.1 224.3 29712.5 28339.8 31 117.4 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 262.9 249.4 33654.0 31928.1 47 81.4 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 232.2 229.2 29721.6 29340.5 50 78.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 222.8 229.4 28430.0 29362.7 42 85.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 231.5 246.5 29628.8 31555.9 6 72.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 261.7 256.8 33498.7 32769.1 33 83.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 262.7 260.7 33628.3 33279.4 35 85.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 234.0 249.1 29867.9 31883.1 18 90.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 252.1 239.8 32263.0 30581.4 32 91.2 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 241.5 257.5 30917.0 32961.1 16 69.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 257.9 243.5 33011.9 31164.2 32 86.8 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 237.5 235.6 30311.2 30046.9 31 67.4 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 290.4 213.1 37172.8 27277.0 79 65.3 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 216.4 284.3 27703.7 36392.5 42 95.4 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 223.8 248.2 28645.1 31774.4 16 69.4 89 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 294.0 217.7 37544.4 27864.2 64 68.0 110 extended device statistics device 
r/s w/s kr/s kw/s qlen svc_t %b ada0 210.7 245.6 26966.6 31439.8 59 107.4 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 228.5 265.2 29246.6 33940.5 10 99.2 98 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 279.1 218.4 35727.2 27955.0 52 71.9 102 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 232.3 293.4 29607.9 37521.4 14 93.2 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 299.5 236.6 38340.2 30288.8 79 69.7 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 216.3 268.9 27686.3 34417.3 4 90.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 285.8 261.0 36585.3 33409.5 53 84.6 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 228.5 232.5 29059.7 29661.1 48 74.3 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 242.7 262.4 31060.0 33588.2 27 69.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 248.2 252.2 31766.1 32149.3 8 78.9 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 267.9 230.2 34288.6 29462.8 62 68.5 100 extended device statistics device r/s w/s kr/s kw/s qlen svc_t %b ada0 238.0 266.2 30375.8 34075.6 0 95.4 100 As can be seen from the log above, at first the workload runs fine, and the disk is only performing writes, but at some point (in this case around 40% of completion) it starts performing this read-before-write dance that completely screws up performance. Roger. From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 18:06:05 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C40FDF85; Sat, 5 Jul 2014 18:06:05 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2FFD62EE7; Sat, 5 Jul 2014 18:06:05 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s65I5qsV048552 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 5 Jul 2014 21:05:52 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s65I5qsV048552 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s65I5q0A048359; Sat, 5 Jul 2014 21:05:52 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Sat, 5 Jul 2014 21:05:52 +0300 From: Konstantin Belousov To: Julian Elischer Subject: Re: Strange IO performance with UFS Message-ID: <20140705180552.GT93733@kib.kiev.ua> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua> <53B80A97.1080803@freebsd.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="oFYJDJkv0E+Y9r9L" Content-Disposition: inline In-Reply-To: <53B80A97.1080803@freebsd.org> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, 
DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home
Cc: freebsd-fs@freebsd.org, Stefan Parvu, FreeBSD Hackers
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 18:06:05 -0000

--oFYJDJkv0E+Y9r9L
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sat, Jul 05, 2014 at 10:24:23PM +0800, Julian Elischer wrote:
> On 7/5/14, 7:24 PM, Konstantin Belousov wrote:
> > This leaves the unwritten portion of the block with garbage. I
> > believe that it is not harmful, since the only way for usermode to
> > access that garbage is through mmap(2). The vnode_generic_getpages()
> > zeroes out the parts of the page which are after EOF.
>
> I have vague memories of this being in a security bulletin once along
> the lines of "random data disclosure" by making tons of 1 frag size
> files and then mmapping them.

I am not sure what you are referencing there. Might it be
http://www.freebsd.org/security/advisories/FreeBSD-SA-13:11.sendfile.asc ?

From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 19:58:28 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4B2D2BE6; Sat, 5 Jul 2014 19:58:28 +0000 (UTC)
Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B25032859; Sat, 5 Jul 2014 19:58:27 +0000 (UTC)
Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s65JwGhe075890 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 5 Jul 2014 22:58:16 +0300 (EEST) (envelope-from kostikbel@gmail.com)
DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s65JwGhe075890
Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s65JwGMt075889; Sat, 5 Jul 2014 22:58:16 +0300 (EEST) (envelope-from kostikbel@gmail.com)
X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f
Date: Sat, 5 Jul 2014 22:58:16 +0300
From: Konstantin Belousov
To: Roger Pau Monné
Subject: Re: Strange IO performance with UFS
Message-ID: <20140705195816.GV93733@kib.kiev.ua>
References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua> <53B8253F.5060403@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="SJ2dn9eo9utwE8uN"
Content-Disposition: inline
In-Reply-To: <53B8253F.5060403@citrix.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home
Cc: freebsd-fs@freebsd.org, Stefan Parvu, FreeBSD Hackers
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 19:58:28 -0000

--SJ2dn9eo9utwE8uN
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monné wrote:
> On 05/07/14 13:24, Konstantin Belousov wrote:
> [The earlier exchange, including the dtrace stack, the BA_CLRBUF
> explanation, the block size discussion, and the proposed ffs_vnops.c
> patch, was quoted here in full; trimmed.]
> > Try this, almost completely untested:
>
> Doesn't seem to help much, I'm still seeing the same issue. I'm
> sampling iostat every 1s, and here's the output from the start of the
> 4k block fio workload:
>
> [Roger's full iostat log, identical to the one reproduced above, was
> quoted here; trimmed.]
>
> As can be seen from the log above, at first the workload runs fine,
> and the disk is only performing writes, but at some point (in this
> case around 40% of completion) it starts performing this
> read-before-write dance that completely screws up performance.

I reproduced this locally. I think my patch is useless for the fio/4k
write situation.

What happens is indeed related to the amount of available memory.
When the size of the file written by fio is larger than memory, the
system has to recycle the cached pages. So after some point, doing a
write has to do a read-before-write, and this does not occur at EOF
(since fio pre-allocated the job file). In fact, I used a 10G file on
an 8G machine, but I interrupted fio before it finished the job. The
longer the previous job runs, the longer the time for which a new job
does not issue reads. If I allow the job to completely fill the cache,
then the reads start immediately on the next job run.

I do not see how anything could be changed there, if we want to keep
the user's file content on partial block writes, and we do.
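(A minimal reproducer sketch for the effect described above, under stated
assumptions: "job.bin" is a pre-allocated file noticeably larger than RAM,
the 4 KiB write size is below the 32 KiB UFS block size, and iostat is
watched in another terminal. The path and sizes are illustrative; fio does
the equivalent with bs=4k against a pre-allocated job file.)

#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char buf[4096];
	off_t size, done;
	int fd;

	memset(buf, 'w', sizeof(buf));

	/* Assumes "job.bin" was pre-allocated, e.g. 10 GiB on an 8 GiB box. */
	fd = open("job.bin", O_WRONLY);
	if (fd == -1)
		err(1, "open");

	size = lseek(fd, 0, SEEK_END);	/* existing file length */
	if (size == -1 || lseek(fd, 0, SEEK_SET) == -1)
		err(1, "lseek");

	/* Sequentially overwrite the file in 4 KiB chunks. Once the
	 * buffer/page cache no longer holds a 32 KiB block, the first
	 * partial write into that block forces UFS to read the whole
	 * block back in (BA_CLRBUF), which shows up as reads in iostat. */
	for (done = 0; done + (off_t)sizeof(buf) <= size;
	    done += sizeof(buf))
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			err(1, "write");

	close(fd);
	return (0);
}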
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 23:32:46 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E850E572; Sat, 5 Jul 2014 23:32:46 +0000 (UTC)
Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5A7A929A3; Sat, 5 Jul 2014 23:32:46 +0000 (UTC)
Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id B1E6978F07; Sat, 5 Jul 2014 16:32:45 -0700 (PDT)
Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 76454-09; Sat, 5 Jul 2014 16:32:45 -0700 (PDT)
Received: from [10.8.0.34] (unknown [10.8.0.34]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id 893C678F04; Sat, 5 Jul 2014 16:32:37 -0700 (PDT)
Content-Type: text/plain; charset=windows-1252
Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\))
Subject: Re: FreeBSD support being added to GlusterFS
From: Jordan Hubbard
In-Reply-To:
Date: Sat, 5 Jul 2014 16:32:35 -0700
Content-Transfer-Encoding: quoted-printable
Message-Id: <1A58F492-946F-46D4-A19E-2734F368CDAC@mail.turbofuzz.com>
References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net>
To: Harshavardhana
X-Mailer: Apple Mail (2.1878.2)
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 23:32:47 -0000

On Jun 29, 2014, at 8:49 PM, Harshavardhana wrote:

> http://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_20140629.tar.bz2
> - I just made the necessary changes from "/var/lib" to "/var/db" -
> please test this tarball out - relevant patches are posted for
> upstream review.

OK, using this version, this is what we see on glusterd --debug:

[2014-07-05 23:32:08.694245] I [MSGID: 100030] [glusterfsd.c:1998:main] 0-glusterd: Started running glusterd version (args: glusterd --debug)
[2014-07-05 23:32:08.694289] D [logging.c:1781:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
[2014-07-05 23:32:08.694482] D [MSGID: 0] [glusterfsd.c:614:get_volfp] 0-glusterfsd: loading volume file /usr/local/etc/glusterfs/glusterd.vol
[2014-07-05 23:32:08.697067] I [glusterd.c:1215:init] 0-management: Using /var/db/glusterd as working directory
[2014-07-05 23:32:08.697124] C [logging.c:2334:gf_cmd_log_init] 0-management: No such file or directory
[2014-07-05 23:32:08.697135] C [glusterd.c:1231:init] 0-this->name: Unable to create cmd log file /usr/local/var/log/glusterfs/.cmd_log_history

Thanks,

- Jordan

From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 23:37:39 2014
Return-Path:
Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 400F7905 for ; Sat, 5 Jul 2014 23:37:39 +0000 (UTC)
Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2823D29DB for ; Sat, 5 Jul 2014 23:37:39 +0000 (UTC)
Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s65NbdQl066763 for ; Sun, 6 Jul 2014 00:37:39 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org)
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191573] [zfs] kernel panic when running zpool/add/files.t
Date: Sat, 05 Jul 2014 23:37:39 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 11.0-CURRENT
X-Bugzilla-Keywords:
X-Bugzilla-Severity: Affects Some People
X-Bugzilla-Who: linimon@FreeBSD.org
X-Bugzilla-Status: Needs Triage
X-Bugzilla-Priority: ---
X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: assigned_to
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 23:37:39 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191573

Mark Linimon changed:

           What       |Removed                   |Added
----------------------------------------------------------------------------
           Assignee   |freebsd-bugs@FreeBSD.org  |freebsd-fs@FreeBSD.org

--- Comment #4 from Mark Linimon ---
Over to maintainers.
--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Sat Jul 5 23:49:21 2014
Return-Path:
Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4D36CE87 for ; Sat, 5 Jul 2014 23:49:21 +0000 (UTC)
Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3526F2AC0 for ; Sat, 5 Jul 2014 23:49:21 +0000 (UTC)
Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s65NnLJl021945 for ; Sun, 6 Jul 2014 00:49:21 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org)
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory
Date: Sat, 05 Jul 2014 23:49:21 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 9.2-RELEASE
X-Bugzilla-Keywords:
X-Bugzilla-Severity: Affects Some People
X-Bugzilla-Who: linimon@FreeBSD.org
X-Bugzilla-Status: Needs Triage
X-Bugzilla-Priority: Normal
X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: assigned_to short_desc
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Sat, 05 Jul 2014 23:49:21 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

Mark Linimon changed:

           What       |Removed                   |Added
----------------------------------------------------------------------------
           Assignee   |freebsd-bugs@FreeBSD.org  |freebsd-fs@FreeBSD.org
           Summary    |ZFS doesn't use all       |[zfs] ZFS doesn't use all
                      |available memory          |available memory

--- Comment #1 from Mark Linimon ---
Over to maintainers.

--
You are receiving this mail because:
You are the assignee for the bug.
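(For reference, a back-of-the-envelope check on the UFS thread above: in
Roger's steady-state iostat samples the read rate roughly equals the write
rate, about 30 MB/s each, which is exactly what one full-block read per
dirtied block predicts. The small program below is only that arithmetic;
the 32 KiB block size, 4 KiB write size, and the 29919.1 kw/s figure are
taken from the thread, nothing else is assumed.)

#include <stdio.h>

int
main(void)
{
	const double bsize_kb = 32.0;	/* UFS block size            */
	const double wsize_kb = 4.0;	/* fio write size            */
	const double kw_s = 29919.1;	/* observed write KB/s       */

	/* Blocks dirtied per second by the sequential writer. */
	double blocks_s = kw_s / bsize_kb;

	/* Each evicted block is read back in full once before its first
	 * partial write; the remaining bsize/wsize - 1 writes into the
	 * same block hit the now-resident buffer. */
	double kr_expected = blocks_s * bsize_kb;

	printf("expected kr/s ~ %.1f (iostat observed 31053.3)\n",
	    kr_expected);
	printf("cache-hit rewrites per block: %.0f\n",
	    bsize_kb / wsize_kb - 1.0);
	return (0);
}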
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 00:34:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AD40B862; Sun, 6 Jul 2014 00:34:49 +0000 (UTC) Received: from mail-qc0-x22f.google.com (mail-qc0-x22f.google.com [IPv6:2607:f8b0:400d:c01::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5AD5B2E24; Sun, 6 Jul 2014 00:34:49 +0000 (UTC) Received: by mail-qc0-f175.google.com with SMTP id i8so2544955qcq.34 for ; Sat, 05 Jul 2014 17:34:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=UHZOG0Zic7Dolta+7PE5zHqVGeQ0zFVr4k67mWbceG0=; b=kszaEFUAq7cN5kOk8lyXG3ZEk2bS186cA8Rfharw8AIlx/S8RRhOm9LJap8mPrd1NP F1KqC0cpfR8AyvpLbYZqpdfNkJw+/ChG5PURD/0l6a8qPmc9nonRK/yQ2dxsu8Rnl7b7 QYwtWDeDQztNDfCA1i5ooLYPk1vk1zgLU2zTSlyrh0Oa5YB2PjZ+QsXyO03GpZ5FS1lC um4Djiu+S+NFJe3MzISx8LilffusZr2qWLUj6GedjajCKU57YGxTg9We7nE/eQ8keoXz 3e++mDbH+Hh5n7J2lUoPzO3v0nX19gbeXFEtvFjqs0TKAaJcH0lreh4uCg7pWKNPHbKb GQgQ== MIME-Version: 1.0 X-Received: by 10.224.66.70 with SMTP id m6mr34805528qai.55.1404606888444; Sat, 05 Jul 2014 17:34:48 -0700 (PDT) Sender: adrian.chadd@gmail.com Received: by 10.224.202.193 with HTTP; Sat, 5 Jul 2014 17:34:48 -0700 (PDT) In-Reply-To: <20140705195816.GV93733@kib.kiev.ua> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua> <53B8253F.5060403@citrix.com> <20140705195816.GV93733@kib.kiev.ua> Date: Sat, 5 Jul 2014 17:34:48 -0700 X-Google-Sender-Auth: clVdKxBczF0xrgWVDzkB20BpSdI Message-ID: Subject: Re: Strange IO performance with UFS From: Adrian Chadd To: Konstantin Belousov Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 00:34:49 -0000 Hm, wait a sec. So if the IO size is a multiple of the underlying FS block size, it should be okay? -a On 5 July 2014 12:58, Konstantin Belousov wrote: > On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monn? wrote: >> On 05/07/14 13:24, Konstantin Belousov wrote: >> > On Sat, Jul 05, 2014 at 12:35:11PM +0200, Roger Pau Monn? wrote: >> >> On 05/07/14 11:58, Konstantin Belousov wrote: >> >>> On Sat, Jul 05, 2014 at 11:32:06AM +0200, Roger Pau Monn? 
>> >>> wrote:
>> >>>> [The remainder of the quoted thread, including Roger's dtrace
>> >>>> and iostat output, kib's BA_CLRBUF explanation and patch, and
>> >>>> Roger's follow-up iostat log, repeats the messages above
>> >>>> verbatim and is trimmed here.]
device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 693.0 0.0 88702.1 80 103.6 97 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 632.7 0.0 80991.8 80 112.0 99 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 709.0 0.0 90748.2 80 107.5 102 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 715.0 0.0 91523.0 80 104.7 101 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 650.1 0.0 83210.5 56 110.9 101 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 682.2 0.0 87319.1 57 107.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 719.0 0.0 92032.6 80 103.6 99 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 624.3 0.0 79905.8 80 110.5 97 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 696.5 0.0 89151.7 80 109.9 103 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 664.2 0.0 85017.6 77 109.9 101 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 681.7 0.0 87254.0 80 107.5 98 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 668.5 0.0 85569.3 57 109.9 99 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 682.3 0.0 87329.0 53 110.8 102 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 0.0 643.9 0.0 82420.9 77 104.8 101 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 107.5 457.1 13701.7 58471.3 57 106.0 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 220.9 253.9 28281.4 32498.9 54 108.8 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 290.6 277.9 37198.8 35576.1 65 94.3 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 309.3 267.9 39590.7 34295.9 80 89.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 213.6 302.0 27212.7 38562.0 24 93.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 232.1 224.3 29712.5 28339.8 31 117.4 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 262.9 249.4 33654.0 31928.1 47 81.4 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 232.2 229.2 29721.6 29340.5 50 78.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 222.8 229.4 28430.0 29362.7 42 85.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 231.5 246.5 29628.8 31555.9 6 72.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 261.7 256.8 33498.7 32769.1 33 83.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 262.7 260.7 33628.3 33279.4 35 85.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 234.0 249.1 29867.9 31883.1 18 90.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 252.1 239.8 32263.0 30581.4 32 91.2 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 241.5 257.5 30917.0 32961.1 16 69.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 257.9 243.5 33011.9 31164.2 32 86.8 100 >> extended device statistics >> device 
r/s w/s kr/s kw/s qlen svc_t %b >> ada0 237.5 235.6 30311.2 30046.9 31 67.4 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 290.4 213.1 37172.8 27277.0 79 65.3 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 216.4 284.3 27703.7 36392.5 42 95.4 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 223.8 248.2 28645.1 31774.4 16 69.4 89 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 294.0 217.7 37544.4 27864.2 64 68.0 110 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 210.7 245.6 26966.6 31439.8 59 107.4 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 228.5 265.2 29246.6 33940.5 10 99.2 98 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 279.1 218.4 35727.2 27955.0 52 71.9 102 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 232.3 293.4 29607.9 37521.4 14 93.2 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 299.5 236.6 38340.2 30288.8 79 69.7 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 216.3 268.9 27686.3 34417.3 4 90.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 285.8 261.0 36585.3 33409.5 53 84.6 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 228.5 232.5 29059.7 29661.1 48 74.3 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 242.7 262.4 31060.0 33588.2 27 69.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 248.2 252.2 31766.1 32149.3 8 78.9 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 267.9 230.2 34288.6 29462.8 62 68.5 100 >> extended device statistics >> device r/s w/s kr/s kw/s qlen svc_t %b >> ada0 238.0 266.2 30375.8 34075.6 0 95.4 100 >> >> As can be seen from the log above, at first the workload runs fine, >> and the disk is only performing writes, but at some point (in this >> case around 40% of completion) it starts performing this >> read-before-write dance that completely screws up performance. > > I reproduced this locally. I think my patch is useless for the fio/4k write > situation. > > What happens is indeed related to the amount of the available memory. > When the size of the file written by fio is larger than the memory, > system has to recycle the cached pages. So after some moment, doing > a write has to do read-before-write, and this occurs not at the EOF > (since fio pre-allocated the job file). > > In fact, I used 10G file on 8G machine, but I interrupted the fio > before it finish the job. The longer the previous job runs, the longer > is time for which new job does not issue reads. If I allow the job to > completely fill the cache, then the reads starts immediately on the next > job run. > > I do not see how could anything be changed there, if we want to keep > user file content on partial block writes, and we do. 
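The practical upshot of the exchange above: once the data set no longer fits in the page cache, any write smaller than the UFS block size turns into a read-modify-write. A minimal sketch of how to confirm the block size and reproduce both behaviours, built from the commands already quoted in this thread (the fio command-line flags are assumed to mirror the job-file parameters used above, and the device and file names are illustrative):

  # stat -f "st_blksize: %k" /mnt/testfile
  # dumpfs /dev/ada0p2 | grep bsize
  # fio --name=seqwrite --rw=write --bs=32k --size=10g --filename=/mnt/testfile
  # iostat -x ada0 1

With --bs equal to the bsize reported by dumpfs (32k here), the kr/s column from iostat should stay near zero for the whole run; with --bs=4k the reads should reappear as soon as the file stops fitting in memory, exactly as in the logs above.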
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 06:46:49 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D58D289B for ; Sun, 6 Jul 2014 06:46:49 +0000 (UTC) Received: from mail-qc0-f177.google.com (mail-qc0-f177.google.com [209.85.216.177]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9209A2652 for ; Sun, 6 Jul 2014 06:46:48 +0000 (UTC) Received: by mail-qc0-f177.google.com with SMTP id r5so2629544qcx.22 for ; Sat, 05 Jul 2014 23:46:42 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=kix5hOCsmJwJ+ZGg8qjBLvAtFDf+URVt+TIOgdzH8i0=; b=lrBXoeOyi2HMWsXLWp2bCvckVwRElvoEMcmiJYJTMaKWPvCqzwwHKRFIO5p56ZaDnn h1vW+ZjJ2a4nLHx78IYJYeZUzqbnvCixlEwFvRNWrxgEPv2PtaRqUoEcL/ieZMY2Kk5q bAvRnVVeN29ID3R9GqMkw4lYK8aemZm3qKAbf2ZG9KJW9AWVDw4QRZoVAZ9zKHcu2uLX dcpWJ9/FcUbJfmLtk5S3ScFphRakyvW51HpjMWjRWslOoLAkZJdVuD/xUglsRTDAp3YS UbcaKwsFQc+aqz3daAuFsKNUWfELmKwnSD6NFzVWukrTL9KMeER+/FO6QWhiDKfkGLz5 woGw== X-Gm-Message-State: ALoCoQl3+ClJyPsYbJJl2Ky+6qPAX3kXnNdMOVrJ8fq5DsIj9tzTZ6UXeBxtQAOAZJGcSzhlAQjK MIME-Version: 1.0 X-Received: by 10.140.105.102 with SMTP id b93mr7868501qgf.3.1404629202543; Sat, 05 Jul 2014 23:46:42 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sat, 5 Jul 2014 23:46:42 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <1A58F492-946F-46D4-A19E-2734F368CDAC@mail.turbofuzz.com> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net> <1A58F492-946F-46D4-A19E-2734F368CDAC@mail.turbofuzz.com> Date: Sat, 5 Jul 2014 23:46:42 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Jordan Hubbard Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 06:46:50 -0000 On Sat, Jul 5, 2014 at 4:32 PM, Jordan Hubbard wro= te: > > On Jun 29, 2014, at 8:49 PM, Harshavardhana w= rote: > >> http://download.gluster.org/pub/gluster/experimental/glusterfs-freebsd_2= 0140629.tar.bz2 >> - I just made the necessary changes from "/var/lib" to "/var/db" - >> please test this tarball out - relevant patches are posted for >> upstream review. > > OK, using this version, this is what we see on glusterd =E2=80=94debug: > > [2014-07-05 23:32:08.694245] I [MSGID: 100030] [glusterfsd.c:1998:main] 0= -glusterd: Started running glusterd version (args: glusterd --debug) > [2014-07-05 23:32:08.694289] D [logging.c:1781:__gf_log_inject_timer_even= t] 0-logging-infra: Starting timer now. 
Timeout = 120, current buf size = 5
> [2014-07-05 23:32:08.694482] D [MSGID: 0] [glusterfsd.c:614:get_volfp] 0-glusterfsd: loading volume file /usr/local/etc/glusterfs/glusterd.vol
> [2014-07-05 23:32:08.697067] I [glusterd.c:1215:init] 0-management: Using /var/db/glusterd as working directory
> [2014-07-05 23:32:08.697124] C [logging.c:2334:gf_cmd_log_init] 0-management: No such file or directory
> [2014-07-05 23:32:08.697135] C [glusterd.c:1231:init] 0-this->name: Unable to create cmd log file /usr/local/var/log/glusterfs/.cmd_log_history
Ah, do we also need to use "/var/db" for "/usr/local/var/log"? Hmm, it should have created the directories in "mkdir_p" fashion - I will verify why it didn't. I must have missed this since my FreeBSD installation might already have "/usr/local/var/log" from a previous installation. Will fix them and update here - thanks again for testing. NOTE: the main porting fixes are part of the upstream repo - I pushed them this week. The change from "/var/lib" to "/var/db" is still awaiting upstream review. -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 07:12:19 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6FE74B66 for ; Sun, 6 Jul 2014 07:12:19 +0000 (UTC) Received: from mail-qa0-f43.google.com (mail-qa0-f43.google.com [209.85.216.43]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 2D6732812 for ; Sun, 6 Jul 2014 07:12:18 +0000 (UTC) Received: by mail-qa0-f43.google.com with SMTP id k15so2459280qaq.2 for ; Sun, 06 Jul 2014 00:12:12 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=MRvXX7uqp18f7eBhK1qjQaWNj1W/soBnMhGWVu/K2C4=; b=P/I3P/DyLoCm6CaitIP6Oo4NZUWf/lgV4kzV6j2FP7UmEcRnBOSAB8lPpomfPTk6BR vjt3U5WzeJIQewFYkmhIw0Z55EfNk+EJdRGJf4lOGICBdWBreEXhGJxo/WcQd+/VNDjz B5G6+uJ0VEi9lVOgbgR2/L1/FF/qBaK0h7QnhAcVv506ejPs47kCJ4vF6YVuo9/DrP69 pfUT64VQvaVoNtM+hkpSaoSKq6vwPnrJfduLXVSRwdmQQA3Aes0lf2Rpieozh0WQWHlW CPkJgq7Mqo3Ke0ClaQz7GrHg/MEIcb4UkthpZkkCSnwBDU6bJOilHlsq3Lp7XTTXpyA3 qdFw== X-Gm-Message-State: ALoCoQmLLOYeOrFzrkplpPisK9s4tquAgsS1ol+1S97jGDgJih5T4wvXMt6GJW1SnbeVxnctCbu5 MIME-Version: 1.0 X-Received: by 10.140.105.102 with SMTP id b93mr8014583qgf.3.1404630732078; Sun, 06 Jul 2014 00:12:12 -0700 (PDT) Received: by 10.229.70.66 with HTTP; Sun, 6 Jul 2014 00:12:11 -0700 (PDT) X-Originating-IP: [24.4.138.100] In-Reply-To: <1A58F492-946F-46D4-A19E-2734F368CDAC@mail.turbofuzz.com> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net> <1A58F492-946F-46D4-A19E-2734F368CDAC@mail.turbofuzz.com> Date: Sun, 6 Jul 2014 00:12:11 -0700 Message-ID: Subject: Re: FreeBSD support being added to GlusterFS From: Harshavardhana To: Jordan Hubbard Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 07:12:19 -0000
> > [2014-07-05 23:32:08.694245] I [MSGID: 100030] [glusterfsd.c:1998:main] 0-glusterd: Started running glusterd version (args: glusterd --debug)
> [2014-07-05 23:32:08.694289] D [logging.c:1781:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
> [2014-07-05 23:32:08.694482] D [MSGID: 0] [glusterfsd.c:614:get_volfp] 0-glusterfsd: loading volume file /usr/local/etc/glusterfs/glusterd.vol
> [2014-07-05 23:32:08.697067] I [glusterd.c:1215:init] 0-management: Using /var/db/glusterd as working directory
> [2014-07-05 23:32:08.697124] C [logging.c:2334:gf_cmd_log_init] 0-management: No such file or directory
> [2014-07-05 23:32:08.697135] C [glusterd.c:1231:init] 0-this->name: Unable to create cmd log file /usr/local/var/log/glusterfs/.cmd_log_history
Just tested this on FreeBSD 10 after 'rm -rf /usr/local' and a complete GlusterFS recompile/install - glusterd doesn't indicate this issue. A dummy "/usr/local/var/log/glusterfs" directory is created nevertheless after a 'make install': glusterfsd/src/Makefile.am: $(INSTALL) -d -m 755 $(DESTDIR)$(localstatedir)/log/glusterfs ^^ this should take care of the problematic directory. Can you check with 'ls -l' whether /usr/local/var/log exists? As a work-around you can compile with # ./configure --localstatedir=/var and it would seem FreeBSD already has /var/log. -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 07:17:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 86238C19; Sun, 6 Jul 2014 07:17:46 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EE5CD2845; Sun, 6 Jul 2014 07:17:45 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s667HZnl046087 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sun, 6 Jul 2014 10:17:35 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s667HZnl046087 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s667HZDG046086; Sun, 6 Jul 2014 10:17:35 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Sun, 6 Jul 2014 10:17:35 +0300 From: Konstantin Belousov To: Adrian Chadd Subject: Re: Strange IO performance with UFS Message-ID: <20140706071735.GZ93733@kib.kiev.ua> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua> <53B8253F.5060403@citrix.com> <20140705195816.GV93733@kib.kiev.ua> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="HiaQvdnqo9FN6ymz" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0
tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 07:17:46 -0000 --HiaQvdnqo9FN6ymz Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable
On Sat, Jul 05, 2014 at 05:34:48PM -0700, Adrian Chadd wrote:
> Hm, wait a sec. So if the IO size is a multiple of the underlying FS
> block size, it should be okay?
It was already suggested in the thread, and confirmed later. Did you read the mails?
>
> -a
>
> On 5 July 2014 12:58, Konstantin Belousov wrote:
> > On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monné wrote:
> >> [... full quote of the previous message, including the 4k-block iostat log, trimmed; it appears verbatim earlier in the thread ...]
> >> extended device statistics
> >> device r/s w/s kr/s kw/s qlen
svc_t %b > >> ada0 267.9 230.2 34288.6 29462.8 62 68.5 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 238.0 266.2 30375.8 34075.6 0 95.4 100 > >> > >> As can be seen from the log above, at first the workload runs fine, > >> and the disk is only performing writes, but at some point (in this > >> case around 40% of completion) it starts performing this > >> read-before-write dance that completely screws up performance. > > > > I reproduced this locally. I think my patch is useless for the fio/4k = write > > situation. > > > > What happens is indeed related to the amount of the available memory. > > When the size of the file written by fio is larger than the memory, > > system has to recycle the cached pages. So after some moment, doing > > a write has to do read-before-write, and this occurs not at the EOF > > (since fio pre-allocated the job file). > > > > In fact, I used 10G file on 8G machine, but I interrupted the fio > > before it finish the job. The longer the previous job runs, the longer > > is time for which new job does not issue reads. If I allow the job to > > completely fill the cache, then the reads starts immediately on the next > > job run. > > > > I do not see how could anything be changed there, if we want to keep > > user file content on partial block writes, and we do. --HiaQvdnqo9FN6ymz Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBAgAGBQJTuPgPAAoJEJDCuSvBvK1BKnYQAI7EFGRcK2nFwWnUT/mqTMSS sTeywWLBdS/KZDLLzD+o1hnqWPy8RZw8Pj0RFs3d+ysqdUFgRg/D98LyaguKx9eR Ykec/ZV94SAIzGWPzW530ZXIQj55aUA/xIFsG0E6AdmatzZ/hY1NGTx95niVBy97 EIW0ikoRZT8JYsxG/UWkT+AmmQ+EyukpuOplui4MOAE17i4a6NNeswcrSkvc7Sty m8KsB1/BM/bUdSX8kFaz6on4XFwc/HAdtYDEL+vdmWQPLiLsVNNLaHu0YTdBUXtb 9G0mYkucuMbMvQRUAg4xUpHuG524zXa8XiEbvmqYk79rs0hlFakfl3qTS8/0cr3M QS9IM6VMCZZIWFVJCWPy/K8Vc2TkKjuu3QJvdK0+2I2AS8NHmmLRdxcLWbmBkrXd tbvsiRNhL9u4byK0zYq8Z5a9O6xxj7IWIYbQtJ3MBBtVtX9VsTIjf4UUdscHA4kH q/k2GPBaasjMqpnsYxJH4AppWIhq63EwE57b8lMMGMH4mpGytXzzOEwjTZ0LJd4R 7um8fsL/1nbxQmWsBve6bq3qqiQsuq8jDJQJzvUHjKM2g6ZZNOAjTz/VIg1UmZr6 rpOCTbl4ZBsBuQ4NeIYpG6e9QU0zqUOYDZIjJZteTM3AtX0QEJChUM+B0tbjWDWL /95Oy4vRwO523xY1zAFw =hBBZ -----END PGP SIGNATURE----- --HiaQvdnqo9FN6ymz-- From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 08:13:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E612F411; Sun, 6 Jul 2014 08:13:30 +0000 (UTC) Received: from mail-qg0-x229.google.com (mail-qg0-x229.google.com [IPv6:2607:f8b0:400d:c04::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 92ABE2BE9; Sun, 6 Jul 2014 08:13:30 +0000 (UTC) Received: by mail-qg0-f41.google.com with SMTP id i50so2744364qgf.28 for ; Sun, 06 Jul 2014 01:13:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=K5iTvwa+3zCE5oHL/S4zIpnC5KWzIh97gFWGEn+Q9ac=; b=jRNWPIucZfLjp9Xsu1hnpUVtzHY8/pkahxDrQDccWmoMJ6Zgvw9BKpfKJGCJO7r85J 8qOhEtMcLz/K74RF4cIZXu9WWkqzT+62Hru8xasSpLANA4h6lP90c3iF02t63V6M1aZG pgVnGi0FDzEmPm71Pm8bOxM/G6ccrvPHKEp9Kkilzzq+aX6vAjRNwGaXgJfYWhFNqDEn 
drYsfK6Py7FN116lF0d6LTk+FhQckCNpAvaGMszMXf//Rijoipu1srwSxC5/pAWIW6aJ EKYIvB7AA4NcpFiPuKC7RKyjsyecamA31uTfmDq7mF1nCZo+mNfLAwzgRULkAgVbAdon lGCQ== MIME-Version: 1.0 X-Received: by 10.224.128.133 with SMTP id k5mr18930843qas.49.1404634409695; Sun, 06 Jul 2014 01:13:29 -0700 (PDT) Sender: adrian.chadd@gmail.com Received: by 10.224.202.193 with HTTP; Sun, 6 Jul 2014 01:13:29 -0700 (PDT) In-Reply-To: <20140706071735.GZ93733@kib.kiev.ua> References: <53B691EA.3070108@citrix.com> <53B69C73.7090806@citrix.com> <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua> <53B8253F.5060403@citrix.com> <20140705195816.GV93733@kib.kiev.ua> <20140706071735.GZ93733@kib.kiev.ua> Date: Sun, 6 Jul 2014 01:13:29 -0700 X-Google-Sender-Auth: ya7N-p4PK2sVEfrz4TgWB0iDjoo Message-ID: Subject: Re: Strange IO performance with UFS From: Adrian Chadd To: Konstantin Belousov Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 08:13:31 -0000
On 6 July 2014 00:17, Konstantin Belousov wrote:
> On Sat, Jul 05, 2014 at 05:34:48PM -0700, Adrian Chadd wrote:
>> Hm, wait a sec. So if the IO size is a multiple of the underlying FS
>> block size, it should be okay?
> It was already suggested in the thread, and confirmed later.
> Did you read the mails?
I did, I must've missed it. Sorry!
-a
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 08:46:24 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E8B4B918; Sun, 6 Jul 2014 08:46:24 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6C1692DE0; Sun, 6 Jul 2014 08:46:24 +0000 (UTC) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s668kF6V068052 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sun, 6 Jul 2014 11:46:15 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s668kF6V068052 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s668kF0h068051; Sun, 6 Jul 2014 11:46:15 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Sun, 6 Jul 2014 11:46:15 +0300 From: Konstantin Belousov To: Adrian Chadd Subject: Re: Strange IO performance with UFS Message-ID: <20140706084615.GC93733@kib.kiev.ua> References: <20140705001938.54a3873dd698080d93d840e2@systemdatarecorder.org> <53B7C616.1000702@citrix.com> <20140705095831.GO93733@kib.kiev.ua> <53B7D4DF.40301@citrix.com> <20140705112448.GQ93733@kib.kiev.ua> <53B8253F.5060403@citrix.com> <20140705195816.GV93733@kib.kiev.ua> <20140706071735.GZ93733@kib.kiev.ua> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="ukxn79E1+tniTzNx" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23
(2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: freebsd-fs@freebsd.org, Stefan Parvu , FreeBSD Hackers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 08:46:25 -0000 --ukxn79E1+tniTzNx Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sun, Jul 06, 2014 at 01:13:29AM -0700, Adrian Chadd wrote: > On 6 July 2014 00:17, Konstantin Belousov wrote: > > On Sat, Jul 05, 2014 at 05:34:48PM -0700, Adrian Chadd wrote: > >> Hm, wait a sec. So if the IO size is a multiple of the underlying FS > >> block size, it should be okay? > > It was already suggested in the thread, and confirmed later. > > Did you read the mails ? >=20 > I did, I must've missed it. SOrry! The thread was definitely long. I am slightly curious how other systems perform in this case. In particular, since fio is originating from Linux, it probably should demonstrate some good behaviour with the default settings. --ukxn79E1+tniTzNx Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBAgAGBQJTuQzXAAoJEJDCuSvBvK1Bx4EP/0CIhDbDLKo2xYiw6d2pIbCd Rj9Izf5EiERXCcdk9iEOGxzplocMuf1zxLd5sx9to4AuBO0c8Wo853z7fjdiFT69 7mPR2jQAyuXMWzNiH5sOrAS2ZpSODnNrziZm2M6coDT+q980jhIzuNZ/91/QUhdm zXCUgbxclI0epbnNnmiTprsV58pEW2I7p2KE1LJ20eBqatQdyh9yAHWL5iP3+XSV 38b6Aux9+ICWoK0qxdLqltXcga9z/vEq7BeGwcio/hzmrZ7KQol7Vwro/tHfKmHT b1L5E9u22cepQvjiHhqXgiXm3maaQqYZKj3AhrfNaFPy9Txu1CRxozY2EL+DykqS 998BJmsQi/H+0ZL7L+g20Ge//flnRD+nk+iSJL1Y4yh4baltcDzHpQMqvBVmVZvC 9dcyaVODpNzUyeOZ8vzrOdhBx2FD6UN4EzNfi8n5hHfcaJU9gYlbWP0981lj1ITT SHYW00YtBPQ0Eqn3HtsTGk1hx3yCknIqB26TwiJHTwIaQ/TnvkBI+ZdArcjpvj3B veFHES+Drzc/Yb6R8ccrmQAzJLdra1sMMM43K1dmPN9mKoLA10HHkZRlJibpCI1U 5k6lk77WmdeUx86MjoYPYlZsAFO+V3d3mcL8+sHvGPticJblGAEFbuLayIp4xiGi w6fEogzflhWANIXMByBk =JtJc -----END PGP SIGNATURE----- --ukxn79E1+tniTzNx-- From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 11:17:12 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BB0A4A97; Sun, 6 Jul 2014 11:17:12 +0000 (UTC) Received: from archeo.suszko.eu (archeo.unixguru.pl [91.121.179.122]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4FF112873; Sun, 6 Jul 2014 11:17:11 +0000 (UTC) Received: from archeo (localhost [127.0.0.1]) by archeo.suszko.eu (Postfix) with ESMTP id 454D22063805; Sun, 6 Jul 2014 13:17:03 +0200 (CEST) X-Virus-Scanned: amavisd-new at archeo.local Received: from archeo.suszko.eu ([127.0.0.1]) by archeo (archeo.local [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id jz3zsLaLQNfU; Sun, 6 Jul 2014 13:17:03 +0200 (CEST) Received: from leo.lan (89-66-16-9.dynamic.chello.pl [89.66.16.9]) by archeo.suszko.eu (Postfix) with ESMTPSA id 8A688206380E; Sun, 6 Jul 2014 13:17:02 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=suszko.eu; s=dkim; t=1404645422; bh=bq6LOvD6x9/bjWYhMr2fB4FJkk38azJybT2DgoTpHKQ=; h=Date:From:To:Cc:Subject:In-Reply-To:References; 
b=Zq3DVKu0zafh8bePlgmhnCcS8W/7UD58KN0z6E9MNVnU9/9JI8g9Y36nTTLbQpNH1 MaWMA0zYycqK28vZrnJOBmuP+jYywn7qEEUZOcvJRkT8p3eddzKCfnu/7k8hKVJ5Vg XeeJO6LuorTXOZ6USqAcy5fOORIsQQIXwlWdj4nw= Date: Sun, 6 Jul 2014 13:16:58 +0200 From: Maciej Suszko To: freebsd-fs@freebsd.org Subject: Re: ccdconfig and Linux mdadm Message-ID: <20140706131658.782e8ba5@leo.lan> In-Reply-To: References: <20140703114254.6472055a@helium> <53B5395B.6040301@freebsd.org> <20140703152801.695a39e6@helium> <53B57935.3090209@freebsd.org> <20140703223548.49b5c907@leo.lan> <04F83DC7-CB5F-4308-A9E5-99F6EE35C7B7@FreeBSD.org> X-Mailer: Claws Mail 3.10.1 (GTK+ 2.24.22; amd64-portbld-freebsd10.0) MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; boundary="Sig_/Kyn==2iiQBe76g58praqcAO"; protocol="application/pgp-signature" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 11:17:12 -0000 --Sig_/Kyn==2iiQBe76g58praqcAO Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable
Warren Block wrote:
> On Fri, 4 Jul 2014, Mark Felder wrote:
> > > > On Jul 3, 2014, at 15:35, Maciej Suszko wrote:
> >> 180mb.* files were created under Linux. Again I can say FreeBSD
> >> rocks! ... as usual :D
> >
> > Wow, this is excellent. I hope this doesn't get lost to the mailing
> > list archives...
>
> Agreed!
>
> I suspect this can also be done with gconcat and gstripe. If so,
> I'm willing to add it as an example to the gconcat man page.
Indeed, it can :) Here are the details of what I just checked:

root@fbsd:~ # gpart show ada{1,3}
=>     1  204799  ada1  MBR  (100M)
       1  204799     1  linux-raid  (100M)
=>     1  204799  ada3  MBR  (100M)
       1  204799     1  linux-raid  (100M)
root@fbsd:~ # file -s /dev/ada{1,3}s1
/dev/ada1s1: data
/dev/ada3s1: data
root@fbsd:~ # gnop create -o 520192 /dev/ada{1,3}s1
root@fbsd:~ # gstripe create -s 512K stripe0 /dev/ada1s1.nop /dev/ada3s1.nop
root@fbsd:~ # file -s /dev/stripe/stripe0
/dev/stripe/stripe0: ReiserFS V3.6
root@fbsd:~ # mount -t reiserfs -o ro /dev/stripe/stripe0 nobackup
root@fbsd:~ # cd nobackup/ && ls -1
.reiserfs_priv
150mb.MD5
150mb.file
root@fbsd:~ # df -ht reiserfs
Filesystem            Size  Used  Avail  Capacity  Mounted on
/dev/stripe/stripe0   199M  182M    17M       92%  /root/nobackup
root@fbsd:~/nobackup # gmd5sum -c 150mb.MD5
150mb.file: OK

The only "magic" here is creating the nop device with an accurate offset (it can be found by reading the first few KB of the device - in the case of a v1.2 superblock). I'm sure it can be automated. The next step is to create the stripe with the same stripe size as was used in Linux. I suggest someone do more testing, for example creating an md device in Linux with an ext3 filesystem, mounting that in FreeBSD, doing some writes and checksums, and going back to Linux to check all created files against those sums.
> ccd and ccdconfig are really old, although the ccdconfig man page was
> updated in October 2013. The example could go there, but I'd really
> rather update the more modern tools.
You're welcome to do so :) -- regards, Maciej Suszko.
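Automating the offset looks feasible along these lines - a sketch only, assuming a v1.2 md superblock and that mdadm's "Data Offset" field (reported in 512-byte sectors) is exactly the amount gnop needs to skip; the device names and the example output are illustrative, not taken from a real array:

  linux# mdadm --examine /dev/sdb1 | grep 'Data Offset'
      Data Offset : 1016 sectors
  linux# echo $((1016 * 512))
  520192
  root@fbsd:~ # gnop create -o 520192 /dev/ada1s1

If that arithmetic is right, the 520192 used in the session above would correspond to a 1016-sector data offset; arrays created with newer mdadm defaults tend to use larger offsets, so the value has to be derived per array rather than hard-coded.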
--Sig_/Kyn==2iiQBe76g58praqcAO Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iEYEARECAAYFAlO5MCwACgkQCikUk0l7iGo4TgCeINmJmBrfs5FaP3FH+nBHMAhM 8t0An0iA/jvKuOGHN5Nxh6mJ88zxo9+z =AFKe -----END PGP SIGNATURE----- --Sig_/Kyn==2iiQBe76g58praqcAO-- From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 14:19:06 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A40A94A8 for ; Sun, 6 Jul 2014 14:19:06 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8BFAF25BE for ; Sun, 6 Jul 2014 14:19:06 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s66EJ6xK067945 for ; Sun, 6 Jul 2014 15:19:06 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191573] [zfs] kernel panic when running zpool/add/files.t Date: Sun, 06 Jul 2014 14:19:06 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 14:19:06 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191573 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- Status|Needs Triage |In Discussion CC| |smh@FreeBSD.org --- Comment #5 from Steven Hartland --- Looks like this may be from an old version of current could you check with the latest source to see if it still exists? -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 14:23:47 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2382F932 for ; Sun, 6 Jul 2014 14:23:47 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0AB9B2664 for ; Sun, 6 Jul 2014 14:23:47 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s66ENkKq027975 for ; Sun, 6 Jul 2014 15:23:46 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Sun, 06 Jul 2014 14:23:47 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 14:23:47 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- Status|Needs Triage |In Discussion CC| |smh@FreeBSD.org --- Comment #2 from Steven Hartland --- This looks like ZFS has backed off from max usage due to app usage on the machine, which is expected behaviour. -- You are receiving this mail because: You are the assignee for the bug. 
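One way to check that diagnosis on the affected machine is to compare the ARC's current size and target against the configured maximum - the sysctl names below are the usual ones on 9.x/10.x but are worth verifying on the installed release:

  # sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c vfs.zfs.arc_max

If arcstats.size tracks arcstats.c and both sit well below vfs.zfs.arc_max while applications are using a lot of memory, the ARC has simply lowered its target to make room, which matches the expected behaviour described in the comment above. top(1) also prints the current ARC total on its own line on recent releases.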
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 18:32:14 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E00D3932; Sun, 6 Jul 2014 18:32:14 +0000 (UTC) Received: from mail.iXsystems.com (newknight.ixsystems.com [206.40.55.70]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 967AA290D; Sun, 6 Jul 2014 18:32:13 +0000 (UTC) Received: from localhost (mail.ixsystems.com [10.2.55.1]) by mail.iXsystems.com (Postfix) with ESMTP id DE77277EDC; Sun, 6 Jul 2014 11:32:12 -0700 (PDT) Received: from mail.iXsystems.com ([10.2.55.1]) by localhost (mail.ixsystems.com [10.2.55.1]) (maiad, port 10024) with ESMTP id 59559-04; Sun, 6 Jul 2014 11:32:12 -0700 (PDT) Received: from [10.8.0.34] (unknown [10.8.0.34]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by mail.iXsystems.com (Postfix) with ESMTPSA id C54AF77ED8; Sun, 6 Jul 2014 11:32:10 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.2\)) Subject: Re: FreeBSD support being added to GlusterFS From: Jordan Hubbard In-Reply-To: Date: Sun, 6 Jul 2014 11:32:07 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <0ABAE2AC-BF1B-4125-ACA9-C6177D013E25@mail.turbofuzz.com> References: <6ADBB2BF-C7E8-4050-9278-2565A63D2EA8@gluster.org> <20140627070411.GI24440@ivaldir.etoilebsd.net> <0F20AEEC-6244-42BC-815C-1440BBBDE664@mail.turbofuzz.com> <20140629203746.GI34108@ivaldir.etoilebsd.net> <1A58F492-946F-46D4-A19E-2734F368CDAC@mail.turbofuzz.com> To: Harshavardhana X-Mailer: Apple Mail (2.1878.2) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 06 Jul 2014 18:32:15 -0000
On Jul 6, 2014, at 12:12 AM, Harshavardhana wrote:
> Just tested this on FreeBSD 10 after 'rm -rf /usr/local' and complete
> GlusterFS recompile/install - glusterd doesn't indicate this issue. A
> dummy "/usr/local/var/log/glusterfs" directory is created
> nevertheless after a 'make install'
>
> glusterfsd/src/Makefile.am: $(INSTALL) -d -m 755
> $(DESTDIR)$(localstatedir)/log/glusterfs
>
> ^^ this should take care of the problematic directory.
>
> Can you check whether '/usr/local/var/log' exists? As a work-around
> you can compile with
I can make the /usr/local/var/log/glusterfs directory and it gets much further. That said, are there some special configure flags we should be passing in our version of the port to properly stuff glusterfs into /var instead? Your email tends to imply that we should be passing --localstatedir, which we can certainly do no problem, I'm just wondering if that's your long-term plan. Again, this is our port: https://github.com/freenas/ports/tree/freenas/9-stable/sysutils/glusterfs The fundamental issue with /usr/local is, again, that /usr/local is read-only on FreeNAS. If there are configuration files that glusterfs expects to be modifiable, they can't live anywhere in /usr/local, nor of course can any temporary files or log files.
Again, this is our port: https://github.com/freenas/ports/tree/freenas/9-stable/sysutils/glusterfs

The fundamental issue with /usr/local is, again, that /usr/local is read-only on FreeNAS. If there are configuration files that glusterfs expects to be modifiable, they can't live anywhere in /usr/local, nor of course can any temporary files or log files. We have made special provisions for /etc and /var such that those can be modified, so we basically just need to compile gluster as a "system service" and put it in the system directories (e.g. prefix is /, not /usr/local).

Thanks - I think we're getting closer! Now that glusterd is actually running, I just need to figure out how to test it. :)

- Jordan

From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 20:08:01 2014
From: Harshavardhana
Date: Sun, 6 Jul 2014 12:13:15 -0700
To: Jordan Hubbard
Cc: freebsd-fs@freebsd.org
Subject: Re: FreeBSD support being added to GlusterFS
In-Reply-To: <0ABAE2AC-BF1B-4125-ACA9-C6177D013E25@mail.turbofuzz.com>

>
> I can make the /usr/local/var/log/glusterfs directory and it gets much further. That said, are there some special configure flags we should be passing in our version of the port to properly stuff glusterfs into /var instead? Your email tends to imply that we should be passing --localstatedir, which we can certainly do, no problem; I'm just wondering if that's your long-term plan.
> Again, this is our port: https://github.com/freenas/ports/tree/freenas/9-stable/sysutils/glusterfs
>
> The fundamental issue with /usr/local is, again, that /usr/local is read-only on FreeNAS. If there are configuration files that glusterfs expects to be modifiable, they can't live anywhere in /usr/local, nor of course can any temporary files or log files. We have made special provisions for /etc and /var such that those can be modified, so we basically just need to compile gluster as a "system service" and put it in the system directories (e.g. prefix is /, not /usr/local).
>

Ah, now I get it - "/usr/local" is not a requirement for GlusterFS; it is baggage from using autotools when you do not specify --prefix during ./configure. For a standard installation under RPM, the following flags are usually used:

# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --libdir=/usr/lib64

Since FreeBSD doesn't need "/usr/lib64", for packages you could just use:

# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var

--
Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes
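For a port, those same paths would normally be handed to configure through the ports framework. A minimal sketch of the relevant Makefile fragment, assuming a GNU-configure-based port; the values here are illustrative, not what the freenas tree actually carries:

# hypothetical fragment of sysutils/glusterfs Makefile
GNU_CONFIGURE=  yes
CONFIGURE_ARGS= --prefix=/usr \
                --sysconfdir=/etc \
                --localstatedir=/var

With those arguments glusterd would keep its logs and run-state under /var and its configuration under /etc, matching the read-only /usr/local constraint described above.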
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 22:09:16 2014
From: Don Lewis <truckman@FreeBSD.org>
Date: Sun, 6 Jul 2014 14:49:16 -0700 (PDT)
To: kostikbel@gmail.com
Cc: freebsd-fs@FreeBSD.org, sparvu@systemdatarecorder.org, freebsd-hackers@FreeBSD.org
Subject: Re: Strange IO performance with UFS
Message-Id: <201407062149.s66LnGnm021769@gw.catspoiler.org>
In-Reply-To: <20140705195816.GV93733@kib.kiev.ua>

On 5 Jul, Konstantin Belousov wrote:
> On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monné wrote:
>> On 05/07/14 13:24, Konstantin Belousov wrote:
>> > On Sat, Jul 05, 2014 at 12:35:11PM +0200, Roger Pau Monné wrote:
>> >> On 05/07/14 11:58, Konstantin Belousov wrote:
>> >>> On Sat, Jul 05, 2014 at 11:32:06AM +0200, Roger Pau Monné
>> >>> wrote:
>> >>>> kernel`g_io_request+0x384 kernel`g_part_start+0x2c3
>> >>>> kernel`g_io_request+0x384 kernel`g_part_start+0x2c3
>> >>>> kernel`g_io_request+0x384 kernel`ufs_strategy+0x8a
>> >>>> kernel`VOP_STRATEGY_APV+0xf5 kernel`bufstrategy+0x46
>> >>>> kernel`cluster_read+0x5e6 kernel`ffs_balloc_ufs2+0x1be2
>> >>>> kernel`ffs_write+0x310 kernel`VOP_WRITE_APV+0x166
>> >>>> kernel`vn_write+0x2eb kernel`vn_io_fault_doio+0x22
>> >>>> kernel`vn_io_fault1+0x78 kernel`vn_io_fault+0x173
>> >>>> kernel`dofilewrite+0x85 kernel`kern_writev+0x65
>> >>>> kernel`sys_write+0x63
>> >>>>
>> >>>> This can also be seen by running iostat in parallel with the
>> >>>> fio workload:
>> >>>>
>> >>>> device    r/s   w/s    kr/s    kw/s qlen svc_t  %b
>> >>>> ada0    243.3 233.7 31053.3 29919.1   31  57.4 100
>> >>>>
>> >>>> This clearly shows that even when I was doing a sequential
>> >>>> write (the fio workload shown above), the disk was actually
>> >>>> reading more data than writing it, which makes no sense, and
>> >>>> all the reads come from the path trace shown above.
>> >>>
>> >>> The backtrace above means that BA_CLRBUF was specified for
>> >>> UFS_BALLOC(). In turn, this occurs when the write size is
>> >>> less than the UFS block size. UFS has to read the block to
>> >>> ensure that a partial write does not corrupt the rest of the
>> >>> buffer.
>> >>
>> >> Thanks for the clarification, that makes sense. I'm not opening
>> >> the file with O_DIRECT, so shouldn't the write be cached in
>> >> memory and flushed to disk when we have the full block? It's a
>> >> sequential write, so the whole block is going to be rewritten
>> >> very soon.
>> >>
>> >>> You can get the block size for a file with stat(2) (the st_blksize
>> >>> field of struct stat), or using statfs(2) (the f_iosize field of
>> >>> struct statfs), or just by looking at the dumpfs output for your
>> >>> filesystem (the bsize value). For modern UFS the typical value is
>> >>> 32KB.
>> >>
>> >> Yes, block size is 32KB, checked with dumpfs. I've changed the
>> >> block size in fio to 32k and then I get the expected results in
>> >> iostat and fio:
>> >>
>> >> extended device statistics
>> >> device    r/s   w/s  kr/s    kw/s qlen svc_t  %b
>> >> ada0      1.0 658.2  31.1 84245.1   58 108.4 101
>> >> ada0      0.0 689.8   0.0 88291.4   54 112.1  99
>> >> ada0      1.0 593.3  30.6 75936.9   80 111.7  97
>> >>
>> >> write: io=10240MB, bw=81704KB/s, iops=2553, runt=128339msec
>> >
>> > The current code in ffs_write() only avoids the read before write when
>> > the write covers a complete block. I think we can somewhat loosen the
>> > test to also avoid the read when we are at EOF and the write completely
>> > covers the valid portion of the last block.
>> >
>> > This leaves the unwritten portion of the block with garbage. I
>> > believe that it is not harmful, since the only way for usermode to
>> > access that garbage is through mmap(2). The
>> > vnode_generic_getpages() zeroes out parts of the page which are
>> > after EOF.
>> >
>> > Try this, almost completely untested:
>>
>> Doesn't seem to help much, I'm still seeing the same issue.
>> I'm sampling iostat every 1s, and here's the output from the start of the
>> 4k block fio workload:
>>
>> extended device statistics
>> device    r/s   w/s    kr/s    kw/s qlen svc_t  %b
>> ada0      0.0 349.5     0.0 44612.3   48  88.0  52
>> ada0      0.0 655.4     0.0 83773.6   76  99.8 100
>> ada0      0.0 699.2     0.0 89493.1   59 109.4 100
>> ada0      0.0 628.1     0.0 80392.6   55 114.8  98
>> ada0      0.0 655.7     0.0 83799.6   79  98.4 102
>> ada0      0.0 701.4     0.0 89782.0   80 105.5  97
>> ada0      0.0 697.9     0.0 89331.6   78 112.0 103
>> ada0      0.0 714.1     0.0 91408.7   77 110.3 100
>> ada0      0.0 724.0     0.0 92675.0   67 112.5 100
>> ada0      0.0 700.4     0.0 89646.6   49 102.5 100
>> ada0      0.0 686.4     0.0 87857.2   78 110.0 100
>> ada0      0.0 702.0     0.0 89851.6   80 112.9  97
>> ada0      0.0 736.3     0.0 94246.4   67 110.1 103
>> ada0      0.0 624.6     0.0 79950.0   48 115.7 100
>> ada0      0.0 704.0     0.0 90118.4   77 106.1 100
>> ada0      0.0 714.6     0.0 91470.0   80 103.6  99
>> ada0      0.0 710.4     0.0 90926.1   80 111.1 100
>> ada0      0.0 655.3     0.0 83882.1   70 115.8  99
>> ada0      0.0 539.8     0.0 69094.5   80 121.2 101
>> ada0      0.0 711.6     0.0 91087.6   79 107.9 100
>> ada0      0.0 705.5     0.0 90304.5   81 111.3  97
>> ada0      0.0 727.3     0.0 93092.8   81 108.9 102
>> ada0      0.0 699.5     0.0 89296.4   55 109.0 100
>> ada0      0.0 689.0     0.0 88066.1   78  96.6 101
>> ada0      0.0 738.3     0.0 94496.1   56 109.1 100
>> ada0      0.0 615.4     0.0 78770.0   80 112.3  98
>> ada0      0.0 707.3     0.0 90529.8   86 105.7 100
>> ada0      0.0 704.3     0.0 89333.9   67  98.3  99
>> ada0      0.0 641.3     0.0 82081.5   80 112.3 101
>> ada0      0.0 701.6     0.0 89747.9   51 101.1 101
>> ada0      0.0 693.0     0.0 88702.1   80 103.6  97
>> ada0      0.0 632.7     0.0 80991.8   80 112.0  99
>> ada0      0.0 709.0     0.0 90748.2   80 107.5 102
>> ada0      0.0 715.0     0.0 91523.0   80 104.7 101
>> ada0      0.0 650.1     0.0 83210.5   56 110.9 101
>> ada0      0.0 682.2     0.0 87319.1   57 107.9 100
>> ada0      0.0 719.0     0.0 92032.6   80 103.6  99
>> ada0      0.0 624.3     0.0 79905.8   80 110.5  97
>> ada0      0.0 696.5     0.0 89151.7   80 109.9 103
>> ada0      0.0 664.2     0.0 85017.6   77 109.9 101
>> ada0      0.0 681.7     0.0 87254.0   80 107.5  98
>> ada0      0.0 668.5     0.0 85569.3   57 109.9  99
>> ada0      0.0 682.3     0.0 87329.0   53 110.8 102
>> ada0      0.0 643.9     0.0 82420.9   77 104.8 101
>> ada0    107.5 457.1 13701.7 58471.3   57 106.0 100
>> ada0    220.9 253.9 28281.4 32498.9   54 108.8 100
>> ada0    290.6 277.9 37198.8 35576.1   65  94.3 100
>> ada0    309.3 267.9 39590.7 34295.9   80  89.5 100
>> ada0    213.6 302.0 27212.7 38562.0   24  93.5 100
>> ada0    232.1 224.3 29712.5 28339.8   31 117.4 100
>> ada0    262.9 249.4 33654.0 31928.1   47  81.4 100
>> ada0    232.2 229.2 29721.6 29340.5   50  78.5 100
>> ada0    222.8 229.4 28430.0 29362.7   42  85.9 100
>> ada0    231.5 246.5 29628.8 31555.9    6  72.9 100
>> ada0    261.7 256.8 33498.7 32769.1   33  83.9 100
>> ada0    262.7 260.7 33628.3 33279.4   35  85.5 100
>> ada0    234.0 249.1 29867.9 31883.1   18  90.9 100
>> ada0    252.1 239.8 32263.0 30581.4   32  91.2 100
>> ada0    241.5 257.5 30917.0 32961.1   16  69.5 100
>> ada0    257.9 243.5 33011.9 31164.2   32  86.8 100
>> ada0    237.5 235.6 30311.2 30046.9   31  67.4 100
>> ada0    290.4 213.1 37172.8 27277.0   79  65.3 100
>> ada0    216.4 284.3 27703.7 36392.5   42  95.4 100
>> ada0    223.8 248.2 28645.1 31774.4   16  69.4  89
>> ada0    294.0 217.7 37544.4 27864.2   64  68.0 110
>> ada0    210.7 245.6 26966.6 31439.8   59 107.4 100
>> ada0    228.5 265.2 29246.6 33940.5   10  99.2  98
>> ada0    279.1 218.4 35727.2 27955.0   52  71.9 102
>> ada0    232.3 293.4 29607.9 37521.4   14  93.2 100
>> ada0    299.5 236.6 38340.2 30288.8   79  69.7 100
>> ada0    216.3 268.9 27686.3 34417.3    4  90.5 100
>> ada0    285.8 261.0 36585.3 33409.5   53  84.6 100
>> ada0    228.5 232.5 29059.7 29661.1   48  74.3 100
>> ada0    242.7 262.4 31060.0 33588.2   27  69.9 100
>> ada0    248.2 252.2 31766.1 32149.3    8  78.9 100
>> ada0    267.9 230.2 34288.6 29462.8   62  68.5 100
>> ada0    238.0 266.2 30375.8 34075.6    0  95.4 100
>>
>> As can be seen from the log above, at first the workload runs fine,
>> and the disk is only performing writes, but at some point (in this
>> case around 40% of completion) it starts performing this
>> read-before-write dance that completely screws up performance.

> I reproduced this locally. I think my patch is useless for the fio/4k
> write situation.
>
> What happens is indeed related to the amount of the available memory.
> When the size of the file written by fio is larger than the memory, the
> system has to recycle the cached pages. So at some point, doing a write
> has to do read-before-write, and this occurs not at EOF (since fio
> pre-allocated the job file).

It would seem to be much better to recycle pages associated with parts of the file that haven't been touched in a long time before recycling pages associated with the filesystem block that is currently being written. If the writes are sequential, then it definitely makes sense to hang on to the block until the last portion of the block is written. It sounds like we are doing pretty much the opposite of this.

What seems odd is that it sounds like we are detecting the partial write of a block, reading the block from the disk, updating it with the new partial data, writing the block, and then immediately tossing it. It seems odd that the dirty block isn't allowed to stick around until the syncer causes it to be written, with clean blocks being reclaimed in the meantime.
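The behaviour discussed in this thread is straightforward to reproduce. A minimal sketch, assuming fio is installed, a UFS filesystem on ada0p2, and /mnt/testfile as a scratch path (all names illustrative):

# Confirm the filesystem block size first (the bsize value, typically 32768):
dumpfs /dev/ada0p2 | grep -m1 bsize

# 4k sequential writes: once the file no longer fits in memory, expect
# kr/s to climb in iostat as UFS reads each 32k block before updating it.
fio --name=w4k --rw=write --bs=4k --size=10g --filename=/mnt/testfile

# Writes issued in whole filesystem blocks avoid the read entirely:
fio --name=w32k --rw=write --bs=32k --size=10g --filename=/mnt/testfile

# In another terminal, watch the r/s and kr/s columns:
iostat -x -w 1 ada0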
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 6 23:09:16 2014
From: Baptiste Daroussin
Date: Mon, 7 Jul 2014 01:09:11 +0200
To: Harshavardhana
Cc: freebsd-fs@freebsd.org
Subject: Re: FreeBSD support being added to GlusterFS
Message-ID: <20140706230910.GA8523@ivaldir.etoilebsd.net>

On Sun, Jul 06, 2014 at 12:13:15PM -0700, Harshavardhana wrote:
> >
> > I can make the /usr/local/var/log/glusterfs directory and it gets much further. That said, are there some special configure flags we should be passing in our version of the port to properly stuff glusterfs into /var instead? Your email tends to imply that we should be passing --localstatedir, which we can certainly do, no problem; I'm just wondering if that's your long-term plan.
> > Again, this is our port: https://github.com/freenas/ports/tree/freenas/9-stable/sysutils/glusterfs
> >
> > The fundamental issue with /usr/local is, again, that /usr/local is read-only on FreeNAS. If there are configuration files that glusterfs expects to be modifiable, they can't live anywhere in /usr/local, nor of course can any temporary files or log files. We have made special provisions for /etc and /var such that those can be modified, so we basically just need to compile gluster as a "system service" and put it in the system directories (e.g. prefix is /, not /usr/local).
> >
> Ah, now I get it - "/usr/local" is not a requirement for GlusterFS; it
> is baggage from using autotools when you do not specify --prefix
> during ./configure. For a standard installation under RPM, the
> following flags are usually used:
>
> # ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
> --libdir=/usr/lib64
>
> Since FreeBSD doesn't need "/usr/lib64", for packages you could just use:
>
> # ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
>

Here is an updated version of my port:
http://people.freebsd.org/~bapt/glusterfs.diff

This time it passes poudriere (for those not aware of poudriere, it is for FreeBSD a bit like what mock is for fedora, but on steroids :))

What is new in there: a dependency on bison that I missed the first time, a dependency on libexecinfo (on non-FreeBSD 10), and a build dependency on git; otherwise build-aux/pkg-version does not happily catch the version.

Tested on FreeBSD 10

regards,
Bapt

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 00:43:54 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191573] [zfs] kernel panic when running zpool/add/files.t
Date: Mon, 07 Jul 2014 00:43:55 +0000
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191573

--- Comment #6 from yaneurabeya@gmail.com ---
Sure!

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 08:00:11 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bugzilla] Commit Needs MFC
Date: Mon, 07 Jul 2014 08:00:10 +0000
Message-Id: <201407070800.s6780Aqs030205@kenobi.freebsd.org>

Hi,

You have a bug in the "Needs MFC" state which has not been touched in 7 or more days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout, you may update this bug with a comment and I won't remind you again for 7 days. This reminder is only sent on Mondays. Please file a bug about any concerns you may have.

This search was scheduled by eadler@FreeBSD.org.
(5 bugs)

Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174
  Severity: Affects Only Me   Priority: Normal   Hardware: Any
  Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
  Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names

Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470
  Severity: Affects Only Me   Priority: Normal   Hardware: Any
  Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
  Summary: [nfs] Cannot mount / in read-only, over NFS

Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651
  Severity: Affects Only Me   Priority: Normal   Hardware: Any
  Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
  Summary: [nfs] mount(8): read-only remount of NFS volume does not work

Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447
  Severity: Affects Only Me   Priority: Normal   Hardware: Any
  Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
  Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional

Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411
  Severity: Affects Only Me   Priority: Normal   Hardware: Any
  Assignee: freebsd-fs@FreeBSD.org   Status: Needs MFC   Resolution:
  Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 08:58:32 2014
From: Thorsten Schlich
Organization: WetterOnline GmbH
Reply-To: thorsten.schlich@wetteronline.de
Date: Mon, 07 Jul 2014 09:57:53 +0200
To: freebsd-fs@freebsd.org
Subject: copying files between zfs servers results in different data in these files
Message-ID: <53BA5301.3030203@wetteronline.de>

Hello,

I have a somewhat crazy behaviour when copying files from one zfs server to another.

The files seem to be the same (same size, same timestamp, etc.) but after copying, the binary data differs at some data points between the original and the copy.

ZFS is the standard filesystem on both servers.

server 1 is a freebsd 8.3 vm (an update is in progress, but for production reasons it needs some time) with a zpool named tank including one virtual disk.
server 2 is a freebsd 9.2 hardware machine with 6x3 TB in a raidz (1-0) pool "tank".

The copy process is:
- find ./ -type f -atime +2 >/tmp/file.txt
- rsync the files named in /tmp/file.txt to server 2

For one day there are 15682 files copied, and between 9000 and 10000 of the copied files differ from their original.

The difference is small. For 21.35 in the original there is a 21.85 in the copy. But without any pattern (checked). As these are meteorological data, such a minor change is crucial to every calculation which comes later.

Copying between two VMs is no problem, all files are correct.

Only the machine with the raidz has this problem. Additionally, the raidz machine boots from a usb drive.

Both servers are in the same room, wired in the same network, and can access each other directly.

The version of rsync is the same everywhere, and zfs is the same version too.

Do you have any hints or suggestions where I could investigate further?

Thanks in advance for help. Below you can find the zpool and zfs configuration.

Regards,
Thorsten

***
Configuration Server 1:

zfs get all tank
NAME  PROPERTY              VALUE                 SOURCE
tank  type                  filesystem            -
tank  creation              Mo Jan 13 15:40 2014  -
tank  used                  437G                  -
tank  available             193G                  -
tank  referenced            437G                  -
tank  compressratio         1.78x                 -
tank  mounted               yes                   -
tank  quota                 none                  default
tank  reservation           none                  default
tank  recordsize            128K                  default
tank  mountpoint            /space                local
tank  sharenfs              off                   default
tank  checksum              on                    default
tank  compression           on                    local
tank  atime                 on                    default
tank  devices               on                    default
tank  exec                  on                    default
tank  setuid                on                    default
tank  readonly              off                   default
tank  jailed                off                   default
tank  snapdir               hidden                default
tank  aclmode               passthrough           local
tank  aclinherit            passthrough           local
tank  canmount              on                    default
tank  xattr                 off                   temporary
tank  copies                1                     default
tank  version               5                     -
tank  utf8only              off                   -
tank  normalization         none                  -
tank  casesensitivity       sensitive             -
tank  vscan                 off                   default
tank  nbmand                off                   default
tank  sharesmb              off                   default
tank  refquota              none                  default
tank  refreservation        none                  default
tank  primarycache          all                   default
tank  secondarycache        all                   default
tank  usedbysnapshots       0                     -
tank  usedbydataset         437G                  -
tank  usedbychildren        322M                  -
tank  usedbyrefreservation  0                     -
tank  logbias               latency               default
tank  dedup                 off                   default
tank  mlslabel              -
tank  sync                  standard              default
tank  refcompressratio      1.78x                 -
tank  written               437G                  -

zpool get all tank
NAME  PROPERTY       VALUE                SOURCE
tank  size           640G                 -
tank  capacity       68%                  -
tank  altroot        -                    default
tank  health         ONLINE               -
tank  guid           7522257494086463050  default
tank  version        28                   default
tank  bootfs         -                    default
tank  delegation     on                   default
tank  autoreplace    off                  default
tank  cachefile      -                    default
tank  failmode       wait                 default
tank  listsnapshots  off                  default
tank  autoexpand     on                   local
tank  dedupditto     0                    default
tank  dedupratio     1.00x                -
tank  free           203G                 -
tank  allocated      437G                 -
tank  readonly       off                  -
tank  comment        -                    default

***
Configuration Server 2:

zfs get all tank
NAME  PROPERTY              VALUE                 SOURCE
tank  type                  filesystem            -
tank  creation              Fr Feb 14 8:13 2014   -
tank  used                  12,7T                 -
tank  available             5,10T                 -
tank  referenced            683M                  -
tank  compressratio         1.74x                 -
tank  mounted               yes                   -
tank  quota                 none                  default
tank  reservation           none                  default
tank  recordsize            128K                  default
tank  mountpoint            /space                local
tank  sharenfs              off                   default
tank  checksum              on                    default
tank  compression           gzip-9                local
tank  atime                 on                    default
tank  devices               on                    default
tank  exec                  on                    default
tank  setuid                on                    default
tank  readonly              off                   default
tank  jailed                off                   default
tank  snapdir               hidden                default
tank  aclmode               discard               default
tank  aclinherit            restricted            default
tank  canmount              on                    default
tank  xattr                 off                   temporary
tank  copies                1                     default
tank  version               5                     -
tank  utf8only              off                   -
tank  normalization         none                  -
tank  casesensitivity       sensitive             -
tank  vscan                 off                   default
tank  nbmand                off                   default
tank  sharesmb              off                   default
tank  refquota              none                  default
tank  refreservation        none                  default
tank  primarycache          all                   default
tank  secondarycache        all                   default
tank  usedbysnapshots       0                     -
tank  usedbydataset         683M                  -
tank  usedbychildren        12,7T                 -
tank  usedbyrefreservation  0                     -
tank  logbias               latency               default
tank  dedup                 off                   default
tank  mlslabel              -
tank  sync                  standard              default
tank  refcompressratio      1.94x                 -
tank  written               683M                  -
tank  logicalused           21,9T                 -
tank  logicalreferenced     1,28G                 -

zpool get all tank
NAME  PROPERTY               VALUE                SOURCE
tank  size                   21,8T                -
tank  capacity               70%                  -
tank  altroot                -                    default
tank  health                 ONLINE               -
tank  guid                   4365850585010436054  default
tank  version                -                    default
tank  bootfs                 -                    default
tank  delegation             on                   default
tank  autoreplace            off                  default
tank  cachefile              -                    default
tank  failmode               wait                 default
tank  listsnapshots          off                  default
tank  autoexpand             on                   local
tank  dedupditto             0                    default
tank  dedupratio             1.00x                -
tank  free                   6,49T                -
tank  allocated              15,3T                -
tank  readonly               off                  -
tank  comment                -                    default
tank  expandsize             0                    -
tank  freeing                0                    default
tank  feature@async_destroy  enabled              local
tank  feature@empty_bpobj    active               local
tank  feature@lz4_compress   enabled              local

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 09:30:06 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191573] [zfs] kernel panic when running zpool/add/files.t
Date: Mon, 07 Jul 2014 09:30:06 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191573

--- Comment #7 from yaneurabeya@gmail.com ---
Still occurs with r268351 on i386 built/booted today.

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 09:31:13 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191573] [zfs] kernel panic when running zpool/add/files.t
Date: Mon, 07 Jul 2014 09:31:13 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191573

--- Comment #8 from yaneurabeya@gmail.com ---
Please note that this might be a malloc failure, because in both cases I was running ZFS on VMs with <=4GB of RAM.

--
You are receiving this mail because:
You are the assignee for the bug.
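If kernel memory exhaustion is the suspect, it can be checked while reproducing, before the panic fires. A minimal sketch, assuming a stock kernel (the numbers will vary with VM size):

# Kernel memory limit and how much of it the ARC is holding:
sysctl vm.kmem_size
sysctl kstat.zfs.misc.arcstats.size

# Per-malloc-type and per-UMA-zone usage, to see what is eating kmem:
vmstat -m
vmstat -z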
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 09:49:40 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191573] [zfs] kernel panic when running zpool/add/files.t
Date: Mon, 07 Jul 2014 09:49:40 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191573

--- Comment #9 from Steven Hartland ---
It's not recommended to run ZFS on i386 due to the kernel stack size. Can you try either amd64, or increasing your stack size with:

options KSTACK_PAGES=4

--
You are receiving this mail because:
You are the assignee for the bug.
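KSTACK_PAGES is a kernel configuration option, so it takes a kernel rebuild. A minimal sketch, assuming sources in /usr/src on i386; the config name ZFSSTACK is made up for the example:

# /usr/src/sys/i386/conf/ZFSSTACK
include GENERIC
ident   ZFSSTACK
options KSTACK_PAGES=4

# build, install, reboot:
cd /usr/src
make buildkernel KERNCONF=ZFSSTACK
make installkernel KERNCONF=ZFSSTACK
shutdown -r now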
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 10:02:14 2014
From: "Ronald Klop"
Date: Mon, 07 Jul 2014 12:02:02 +0200
To: freebsd-fs@freebsd.org
Subject: Re: copying files between zfs servers results in different data in these files
In-Reply-To: <53BA5301.3030203@wetteronline.de>

On Mon, 07 Jul 2014 09:57:53 +0200, Thorsten Schlich wrote:

> Hello,
>
> I have a somewhat crazy behaviour when copying files from one zfs
> server to another.
>
> The files seem to be the same (same size, same timestamp, etc.) but
> after copying, the binary data differs at some data points between the
> original and the copy.
>
> [...]
>
> The copy process is:
> - find ./ -type f -atime +2 >/tmp/file.txt
> - rsync the files named in /tmp/file.txt to server 2

Can you give your complete rsync command?

As rsync is in between, it would be interesting if you can determine whether it reads the wrong value from the source, or the value has changed after sending it across the network, or it is changed after writing the right value on the destination server.

Regards,
Ronald.

> [...]
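One way to act on Ronald's suggestion is to checksum the files independently on both ends, so rsync itself is taken out of the equation. A minimal sketch, assuming ssh access between the hosts and /space as the dataset mountpoint (paths and host name illustrative):

# On server 1, checksum the candidate files:
cd /space && find . -type f -atime +2 -exec md5 -r {} + | sort -k 2 > /tmp/src.md5

# Re-run the copy letting rsync compare full checksums instead of size+mtime:
rsync -avc /space/ server2:/space/

# On server 2, checksum the copies and compare:
cd /space && find . -type f -exec md5 -r {} + | sort -k 2 > /tmp/dst.md5
diff /tmp/src.md5 /tmp/dst.md5

If the two lists differ immediately after a checksummed transfer, the data is being altered on the destination after it is written; if repeated rsync -c runs keep re-sending the same files, the source copy is changing underneath the reads.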
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 7 13:52:12 2014
From: Konstantin Belousov <kostikbel@gmail.com>
Date: Mon, 7 Jul 2014 16:51:54 +0300
To: Don Lewis
Cc: freebsd-fs@FreeBSD.org, sparvu@systemdatarecorder.org, freebsd-hackers@FreeBSD.org
Subject: Re: Strange IO performance with UFS
Message-ID: <20140707135154.GM93733@kib.kiev.ua>
In-Reply-To: <201407062149.s66LnGnm021769@gw.catspoiler.org>

On Sun, Jul 06, 2014 at 02:49:16PM -0700, Don Lewis wrote:
> On 5 Jul, Konstantin Belousov wrote:
> > On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monné wrote:
> [...]
>> ada0 262.7 260.7 33628.3 33279.4 35 85.5 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 234.0 249.1 29867.9 31883.1 18 90.9 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 252.1 239.8 32263.0 30581.4 32 91.2 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 241.5 257.5 30917.0 32961.1 16 69.5 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 257.9 243.5 33011.9 31164.2 32 86.8 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 237.5 235.6 30311.2 30046.9 31 67.4 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 290.4 213.1 37172.8 27277.0 79 65.3 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 216.4 284.3 27703.7 36392.5 42 95.4 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 223.8 248.2 28645.1 31774.4 16 69.4 89 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 294.0 217.7 37544.4 27864.2 64 68.0 110 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 210.7 245.6 26966.6 31439.8 59 107.4 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 228.5 265.2 29246.6 33940.5 10 99.2 98 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 279.1 218.4 35727.2 27955.0 52 71.9 102 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 232.3 293.4 29607.9 37521.4 14 93.2 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 299.5 236.6 38340.2 30288.8 79 69.7 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 216.3 268.9 27686.3 34417.3 4 90.5 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 285.8 261.0 36585.3 33409.5 53 84.6 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 228.5 232.5 29059.7 29661.1 48 74.3 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 242.7 262.4 31060.0 33588.2 27 69.9 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 248.2 252.2 31766.1 32149.3 8 78.9 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 267.9 230.2 34288.6 29462.8 62 68.5 100 > >> extended device statistics > >> device r/s w/s kr/s kw/s qlen svc_t %b > >> ada0 238.0 266.2 30375.8 34075.6 0 95.4 100 > >>=20 > >> As can be seen from the log above, at first the workload runs fine, > >> and the disk is only performing writes, but at some point (in this > >> case around 40% of completion) it starts performing this > >> read-before-write dance that completely screws up performance. > >=20 > > I reproduced this locally. I think my patch is useless for the fio/4k = write > > situation. > >=20 > > What happens is indeed related to the amount of the available memory. > > When the size of the file written by fio is larger than the memory, > > system has to recycle the cached pages. So after some moment, doing > > a write has to do read-before-write, and this occurs not at the EOF > > (since fio pre-allocated the job file). 
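For anyone reproducing this, the block size the writes need to match can be checked directly. A minimal sketch, where the file, mount point and device names are placeholders:

# st_blksize for a file, i.e. the UFS block size (typically 32768):
stat -f "%k" /mnt/test/file
# the same value straight from the superblock:
dumpfs /dev/ada0p2 | grep -w bsize
# writes in multiples of that size sidestep the read-before-write:
dd if=/dev/zero of=/mnt/test/file bs=32k count=32768 conv=notrunc

This is consistent with what Roger saw above: switching fio from 4k to 32k writes made the reads disappear.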
>
> It would seem to be much better to recycle pages associated with parts of the
> file that haven't been touched in a long time before recycling pages
> associated with the filesystem block that is currently being written. If
> the writes are sequential, then it definitely makes sense to hang on to
> the block until the last portion of the block is written. It sounds
> like we are doing pretty much the opposite of this.

Yes, this sounds suspicious.

>
> What seems odd is that it sounds like we are detecting the partial write
> of a block, reading the block from the disk, updating it with the new
> partial data, writing the block, and then immediately tossing it. It
> seems odd that the dirty block isn't allowed to stick around until
> the syncer causes it to be written, with clean blocks being reclaimed in
> the meantime.

No, we don't. The amounts of read and written bytes in the read-after-write state are equal. We read the whole buffer, then write 8 chunks of 4KB each, all using the cached and delayed-write buffer. Had we indeed tossed the buffer after each write, the read byte count would be 8 times the write count.

From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 01:06:46 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 43AD295B for ; Tue, 8 Jul 2014 01:06:46 +0000 (UTC) Received: from mail-la0-x236.google.com (mail-la0-x236.google.com [IPv6:2a00:1450:4010:c03::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C26F4253A for ; Tue, 8 Jul 2014 01:06:45 +0000 (UTC) Received: by mail-la0-f54.google.com with SMTP id mc6so3501251lab.27 for ; Mon, 07 Jul 2014 18:06:43 -0700 (PDT) MIME-Version: 1.0 X-Received: by 10.152.37.194 with SMTP id a2mr25689891lak.29.1404781603561; Mon, 07 Jul 2014 18:06:43 -0700 (PDT) Received: by 10.115.2.3 with HTTP;
Mon, 7 Jul 2014 18:06:43 -0700 (PDT) Date: Mon, 7 Jul 2014 18:06:43 -0700 Message-ID: Subject: Using 2 SSD's to create a SLOG From: javocado To: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 01:06:46 -0000

I am interested in adding an SSD "SLOG" to my ZFS system so as to (dramatically) speed up writes on this system.

My question is whether ZFS will, itself, internally, mirror two SSDs that are used as a SLOG?

What I mean is, if ZFS is already smart enough to create a zpool mirror (or, in my case, a zpool raidz3), then perhaps ZFS is also smart enough to mirror the SLOG to two individual SSDs?

I am hoping to dumbly plug two SSDs onto motherboard SATA ports and just hand them over, raw, to ZFS.

Can someone shed some light on how this might, could or should work?

From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 11:11:14 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CEE35886 for ; Tue, 8 Jul 2014 11:11:14 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B1231261E for ; Tue, 8 Jul 2014 11:11:14 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s68BBETJ075743 for ; Tue, 8 Jul 2014 12:11:14 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Tue, 08 Jul 2014 11:11:14 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: vsjcfm@gmail.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 11:11:14 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

--- Comment #3 from vsjcfm@gmail.com ---
(In reply to Steven Hartland from comment #2)
> This looks like ZFS has backed off from max usage due to app usage on the
> machine, which is expected behaviour.

I don't think so, because:
1. ARC memory usage never grows above 188 GB.
2. I have no memory-hungry processes on this machine.

root@cs0:~# ps axu
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 11 2292,8 0,0 0 384 ?? RL 14Jun14 784228:58,16 [idle]
root 0 60,2 0,0 0 9904 ?? DLs 14Jun14 7988:47,05 [kernel]
root 12 45,3 0,0 0 960 ?? WL 14Jun14 10349:33,20 [intr]
root 4 10,6 0,0 0 176 ?? DL 14Jun14 3197:31,63 [zfskern]
root 13 6,0 0,0 0 48 ?? DL 14Jun14 1325:45,03 [geom]
jason 53722 4,7 0,0 52080 6124 ?? S 12:24 3:59,00 sshd: jason@notty (sshd)
root 1256 3,7 0,0 12008 1588 ?? Ss 14Jun14 14:07,96 /usr/sbin/syslogd -ccss
www 79586 3,3 0,0 25156 9184 ?? S 25Jun14 148:51,93 nginx: worker process (nginx)
www 79575 2,5 0,0 25156 9792 ?? S 25Jun14 156:30,95 nginx: worker process (nginx)
www 79571 2,4 0,0 25156 8764 ?? S 25Jun14 154:22,46 nginx: worker process (nginx)
root 33499 2,3 0,0 0 16 ?? DLs 13:57 0:11,76 [aiod3]
www 79588 2,3 0,0 25156 9076 ?? S 25Jun14 154:41,30 nginx: worker process (nginx)
www 79591 2,2 0,0 25156 9564 ?? S 25Jun14 164:38,45 nginx: worker process (nginx)
root 33496 1,8 0,0 0 16 ?? DLs 13:56 0:13,22 [aiod1]
root 97809 1,8 0,0 0 16 ?? DLs 14:02 0:04,57 [aiod4]
root 35718 1,7 0,0 75336 15312 ?? S Thu16 131:47,49 /usr/local/sbin/snmpd -p /var/run/net_snmpd.pid -c /usr/local/etc/snmpd.con
www 79576 1,6 0,0 25156 8796 ?? S 25Jun14 154:48,13 nginx: worker process (nginx)
www 79590 1,6 0,0 25156 8788 ?? S 25Jun14 152:07,78 nginx: worker process (nginx)
root 97812 1,4 0,0 0 16 ?? DLs 14:02 0:02,15 [aiod5]
www 79580 1,3 0,0 25156 9048 ?? S 25Jun14 160:04,70 nginx: worker process (nginx)
www 79582 1,3 0,0 25156 8544 ?? S 25Jun14 157:14,82 nginx: worker process (nginx)
root 55608 1,2 0,0 0 16 ?? DLs 14:00 0:06,40 [aiod6]
root 65838 1,1 0,0 0 16 ?? DLs 14:00 0:07,86 [aiod8]
www 79579 1,1 0,0 25156 9692 ?? S 25Jun14 156:07,03 nginx: worker process (nginx)
www 79589 1,1 0,0 25156 8532 ?? S 25Jun14 146:36,24 nginx: worker process (nginx)
root 97883 1,1 0,0 0 16 ?? DLs 14:05 0:00,27 [aiod2]
root 97815 1,0 0,0 0 16 ?? DLs 14:04 0:02,10 [aiod9]
www 79573 0,9 0,0 25156 9652 ?? S 25Jun14 162:47,38 nginx: worker process (nginx)
www 79570 0,7 0,0 25156 8792 ?? S 25Jun14 156:29,25 nginx: worker process (nginx)
www 79577 0,5 0,0 25156 9352 ?? S 25Jun14 148:34,48 nginx: worker process (nginx)
www 79585 0,5 0,0 25156 9052 ?? S 25Jun14 155:20,44 nginx: worker process (nginx)
www 79569 0,4 0,0 25156 9788 ?? S 25Jun14 153:58,32 nginx: worker process (nginx)
www 79592 0,4 0,0 25156 8788 ?? S 25Jun14 152:43,48 nginx: worker process (nginx)
www 79572 0,3 0,0 25156 9588 ?? S 25Jun14 158:04,14 nginx: worker process (nginx)
root 53765 0,3 0,0 20620 5148 0 S+ 12:25 0:32,40 top -aSHz
www 79583 0,2 0,0 25156 9316 ?? S 25Jun14 154:24,06 nginx: worker process (nginx)
root 1 0,0 0,0 6280 560 ?? ILs 14Jun14 0:01,57 /sbin/init --
root 2 0,0 0,0 0 16 ?? DL 14Jun14 0:06,31 [mps_scan0]
root 3 0,0 0,0 0 16 ?? DL 14Jun14 0:06,86 [mps_scan1]
root 5 0,0 0,0 0 16 ?? DL 14Jun14 0:00,00 [xpt_thrd]
root 6 0,0 0,0 0 16 ?? DL 14Jun14 0:00,00 [ipmi0: kcs]
root 7 0,0 0,0 0 16 ?? DL 14Jun14 0:01,45 [pagedaemon]
root 8 0,0 0,0 0 16 ?? DL 14Jun14 0:00,00 [vmdaemon]
root 9 0,0 0,0 0 16 ?? DL 14Jun14 0:00,02 [pagezero]
root 10 0,0 0,0 0 16 ?? DL 14Jun14 0:00,00 [audit]
root 14 0,0 0,0 0 16 ?? DL 14Jun14 40:02,48 [yarrow]
root 15 0,0 0,0 0 128 ?? DL 14Jun14 1:54,87 [usb]
root 16 0,0 0,0 0 16 ?? DL 14Jun14 0:07,12 [bufdaemon]
root 17 0,0 0,0 0 16 ?? DL 14Jun14 133:39,76 [syncer]
root 18 0,0 0,0 0 16 ?? DL 14Jun14 0:07,71 [vnlru]
root 19 0,0 0,0 0 16 ?? DL 14Jun14 0:00,04 [g_mirror swap0]
root 20 0,0 0,0 0 16 ?? DL 14Jun14 0:00,04 [g_mirror swap1]
root 987 0,0 0,0 14184 1612 ?? Is 14Jun14 0:00,00 /usr/sbin/moused -p /dev/ums0 -t auto -I /var/run/moused.ums0.pid
root 1027 0,0 0,0 10372 4456 ?? Is 14Jun14 0:00,65 /sbin/devd
root 1267 0,0 0,0 14092 1932 ?? Ss 14Jun14 0:02,50 /usr/sbin/rpcbind -h 10.0.8.30 -l
root 1298 0,0 0,0 12008 1852 ?? Is 14Jun14 0:00,00 /usr/sbin/mountd -h 10.0.8.30 -l -S /etc/exports /etc/zfs/exports
root 1304 0,0 0,0 9868 1628 ?? Is 14Jun14 0:00,26 nfsd: master (nfsd)
root 1305 0,0 0,0 9868 1780 ?? S 14Jun14 37:23,54 nfsd: server (nfsd)
root 1308 0,0 0,0 274112 1744 ?? Is 14Jun14 0:01,57 /usr/sbin/rpc.statd -h 10.0.8.30
root 1311 0,0 0,0 14092 1744 ?? Ss 14Jun14 0:02,36 /usr/sbin/rpc.lockd -h 10.0.8.30
root 1385 0,0 0,0 22216 3508 ?? Ss 14Jun14 1:17,27 /usr/sbin/ntpd -g -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.dri
root 1388 0,0 0,0 12004 1528 ?? Ss 14Jun14 38:53,23 /usr/sbin/powerd
root 1402 0,0 0,0 28144 5016 ?? I 14Jun14 0:12,78 /usr/local/sbin/smartd -c /usr/local/etc/smartd.conf -p /var/run/smartd.pid
root 1448 0,0 0,0 28868 4168 ?? Is 14Jun14 0:02,46 /usr/sbin/sshd
root 1451 0,0 0,0 20288 4592 ?? Ss 14Jun14 0:27,04 sendmail: accepting connections (sendmail)
smmsp 1454 0,0 0,0 20288 4408 ?? Is 14Jun14 0:00,66 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
root 1458 0,0 0,0 14096 1760 ?? Is 14Jun14 0:06,77 /usr/sbin/cron -s
root 24965 0,0 0,0 52080 4772 ?? Is 12:05 0:00,02 sshd: jason [priv] (sshd)
jason 24967 0,0 0,0 52080 5080 ?? S 12:05 0:01,40 sshd: jason@pts/0 (sshd)
root 35701 0,0 0,0 19152 2292 ?? Is Thu16 0:00,67 /usr/local/bin/rsync --daemon
root 53717 0,0 0,0 52080 4836 ?? Is 12:24 0:00,05 sshd: jason [priv] (sshd)
jason 53723 0,0 0,0 17388 3240 ?? Is 12:24 0:00,01 tcsh -c dd of=/mnt/ztemp/jason/lvhdd.dd.gz bs=128k
jason 53725 0,0 0,0 9872 1916 ?? S 12:24 0:16,87 dd of=/mnt/ztemp/jason/lvhdd.dd.gz bs=128k
root 79568 0,0 0,0 21060 4352 ?? I 25Jun14 0:00,02 nginx: master process /usr/local/sbin/nginx
www 79574 0,0 0,0 25156 9308 ?? S 25Jun14 159:31,99 nginx: worker process (nginx)
www 79578 0,0 0,0 25156 9688 ?? S 25Jun14 156:39,05 nginx: worker process (nginx)
www 79581 0,0 0,0 25156 9048 ?? S 25Jun14 152:05,38 nginx: worker process (nginx)
www 79584 0,0 0,0 25156 8792 ?? S 25Jun14 149:30,11 nginx: worker process (nginx)
www 79587 0,0 0,0 25156 8740 ?? S 25Jun14 157:19,82 nginx: worker process (nginx)
root 97816 0,0 0,0 52080 4772 ?? Is 14:04 0:00,02 sshd: jason [priv] (sshd)
jason 97818 0,0 0,0 52080 5080 ?? S 14:04 0:00,03 sshd: jason@pts/1 (sshd)
root 1520 0,0 0,0 12008 1588 v1 Is+ 14Jun14 0:00,04 /usr/libexec/getty Pc ttyv1
root 1521 0,0 0,0 12008 1588 v2 Is+ 14Jun14 0:00,04 /usr/libexec/getty Pc ttyv2
root 1522 0,0 0,0 12008 1588 v3 Is+ 14Jun14 0:00,04 /usr/libexec/getty Pc ttyv3
jason 24968 0,0 0,0 17388 4148 0 Is 12:05 0:00,02 -tcsh (tcsh)
root 24970 0,0 0,0 46560 3548 0 I 12:05 0:00,02 sudo su -
root 24971 0,0 0,0 43180 2220 0 I 12:06 0:00,00 su -
root 24972 0,0 0,0 17388 3952 0 I 12:06 0:00,02 -su (tcsh)
jason 97819 0,0 0,0 17388 4148 1 Is 14:04 0:00,02 -tcsh (tcsh)
root 97821 0,0 0,0 46560 3548 1 I 14:04 0:00,01 sudo su -
root 97832 0,0 0,0 43180 2220 1 I 14:05 0:00,00 su -
root 97833 0,0 0,0 17388 3952 1 S 14:05 0:00,03 -su (tcsh)
root 97902 0,0 0,0 14144 2092 1 R+ 14:06 0:00,00 ps axu
root@cs0:~#

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 12:33:20 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B941765E for ; Tue, 8 Jul 2014 12:33:20 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A1B642D9D for ; Tue, 8 Jul 2014 12:33:20 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s68CXK90066371 for ; Tue, 8 Jul 2014 13:33:20 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Tue, 08 Jul 2014 12:33:20 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 12:33:20 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

--- Comment #4 from Steven Hartland ---
Are you sure that it's not reduced over time due to some high memory usage processes which now aren't running?

To check this, reboot and check the values then.

--
You are receiving this mail because:
You are the assignee for the bug.
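A sketch of capturing that post-reboot baseline so later values can be compared; the output path is arbitrary:

# run once shortly after boot and keep for comparison:
sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.size > /var/tmp/arc-baseline.txt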
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 13:02:49 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3F6BF4B8 for ; Tue, 8 Jul 2014 13:02:49 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1CDC5208D for ; Tue, 8 Jul 2014 13:02:49 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s68D2mgM001435 for ; Tue, 8 Jul 2014 14:02:48 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Tue, 08 Jul 2014 13:02:49 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 13:02:49 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510 --- Comment #5 from Steven Hartland --- sysctl kstat.zfs.misc.arcstats would also be useful -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 13:31:48 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1D5F65AD for ; Tue, 8 Jul 2014 13:31:48 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F3F0423A3 for ; Tue, 8 Jul 2014 13:31:47 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s68DVl4T077895 for ; Tue, 8 Jul 2014 14:31:47 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Tue, 08 Jul 2014 13:31:48 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: vsjcfm@gmail.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 13:31:48 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

--- Comment #6 from vsjcfm@gmail.com ---
(In reply to Steven Hartland from comment #4)
> Are you sure that it's not reduced over time due to some high memory usage
> processes which now aren't running?
>
> To check this, reboot and check the values then.

Absolutely: this machine is just an nginx fileserver, with no other tasks. I can collect an ARC graph if that information would be useful.
(In reply to Steven Hartland from comment #5) > sysctl kstat.zfs.misc.arcstats would also be useful kstat.zfs.misc.arcstats.hits: 6357561684 kstat.zfs.misc.arcstats.misses: 1754865888 kstat.zfs.misc.arcstats.demand_data_hits: 2247259288 kstat.zfs.misc.arcstats.demand_data_misses: 8782100 kstat.zfs.misc.arcstats.demand_metadata_hits: 2211824311 kstat.zfs.misc.arcstats.demand_metadata_misses: 8160603 kstat.zfs.misc.arcstats.prefetch_data_hits: 1283671663 kstat.zfs.misc.arcstats.prefetch_data_misses: 1737864288 kstat.zfs.misc.arcstats.prefetch_metadata_hits: 614806422 kstat.zfs.misc.arcstats.prefetch_metadata_misses: 58897 kstat.zfs.misc.arcstats.mru_hits: 1548194108 kstat.zfs.misc.arcstats.mru_ghost_hits: 10388582 kstat.zfs.misc.arcstats.mfu_hits: 3287889203 kstat.zfs.misc.arcstats.mfu_ghost_hits: 102318783 kstat.zfs.misc.arcstats.allocated: 1774678632 kstat.zfs.misc.arcstats.deleted: 1646058550 kstat.zfs.misc.arcstats.stolen: 1146219028 kstat.zfs.misc.arcstats.recycle_miss: 3101763 kstat.zfs.misc.arcstats.mutex_miss: 4427461 kstat.zfs.misc.arcstats.evict_skip: 284890938 kstat.zfs.misc.arcstats.evict_l2_cached: 188456702423040 kstat.zfs.misc.arcstats.evict_l2_eligible: 32054253667840 kstat.zfs.misc.arcstats.evict_l2_ineligible: 9169893205504 kstat.zfs.misc.arcstats.hash_elements: 18175960 kstat.zfs.misc.arcstats.hash_elements_max: 18488668 kstat.zfs.misc.arcstats.hash_collisions: 1127435975 kstat.zfs.misc.arcstats.hash_chains: 3877554 kstat.zfs.misc.arcstats.hash_chain_max: 24 kstat.zfs.misc.arcstats.p: 145378360667 kstat.zfs.misc.arcstats.c: 197771088957 kstat.zfs.misc.arcstats.c_min: 33203088384 kstat.zfs.misc.arcstats.c_max: 265624707072 kstat.zfs.misc.arcstats.size: 200751138272 kstat.zfs.misc.arcstats.hdr_size: 1329000768 kstat.zfs.misc.arcstats.data_size: 194945960448 kstat.zfs.misc.arcstats.other_size: 1720016232 kstat.zfs.misc.arcstats.l2_hits: 558004136 kstat.zfs.misc.arcstats.l2_misses: 1196861331 kstat.zfs.misc.arcstats.l2_feeds: 4171054 kstat.zfs.misc.arcstats.l2_rw_clash: 72326 kstat.zfs.misc.arcstats.l2_read_bytes: 73002745243136 kstat.zfs.misc.arcstats.l2_write_bytes: 113011830327808 kstat.zfs.misc.arcstats.l2_writes_sent: 3913397 kstat.zfs.misc.arcstats.l2_writes_done: 3913396 kstat.zfs.misc.arcstats.l2_writes_error: 0 kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 117425 kstat.zfs.misc.arcstats.l2_evict_lock_retry: 91025 kstat.zfs.misc.arcstats.l2_evict_reading: 219 kstat.zfs.misc.arcstats.l2_free_on_write: 4702368 kstat.zfs.misc.arcstats.l2_abort_lowmem: 56 kstat.zfs.misc.arcstats.l2_cksum_bad: 18 kstat.zfs.misc.arcstats.l2_io_error: 0 kstat.zfs.misc.arcstats.l2_size: 1838076781056 kstat.zfs.misc.arcstats.l2_asize: 1838076145664 kstat.zfs.misc.arcstats.l2_hdr_size: 3164330456 kstat.zfs.misc.arcstats.l2_compress_successes: 144015 kstat.zfs.misc.arcstats.l2_compress_zeros: 0 kstat.zfs.misc.arcstats.l2_compress_failures: 0 kstat.zfs.misc.arcstats.l2_write_trylock_fail: 387445125 kstat.zfs.misc.arcstats.l2_write_passed_headroom: 134983793 kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 59111238565 kstat.zfs.misc.arcstats.l2_write_in_l2: 850651818119 kstat.zfs.misc.arcstats.l2_write_io_in_progress: 847 kstat.zfs.misc.arcstats.l2_write_not_cacheable: 36478134670 kstat.zfs.misc.arcstats.l2_write_full: 2008530 kstat.zfs.misc.arcstats.l2_write_buffer_iter: 4171054 kstat.zfs.misc.arcstats.l2_write_pios: 3913397 kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 27587228945605632 kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 225730988 
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 19832
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.duplicate_buffers: 0
kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
kstat.zfs.misc.arcstats.duplicate_reads: 0

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 14:28:05 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5D024EC6 for ; Tue, 8 Jul 2014 14:28:05 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4524A2927 for ; Tue, 8 Jul 2014 14:28:05 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s68ES5lX070930 for ; Tue, 8 Jul 2014 15:28:05 +0100 (BST) (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Tue, 08 Jul 2014 14:28:05 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 14:28:05 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

--- Comment #7 from Steven Hartland ---
Just because the processes are using small amounts of memory doesn't mean other aspects of the kernel aren't spiking up and demanding RAM, such as mbufs; so just because it's only running nginx doesn't mean you haven't seen a memory spike in other areas.

Are you seeing any movement at all over time in:
kstat.zfs.misc.arcstats.c

--
You are receiving this mail because:
You are the assignee for the bug.
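A trivial sh loop is enough to watch for that movement; a sketch, with an arbitrary 60-second interval and log path:

while :; do
    date
    sysctl kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.size
    sleep 60
done >> /var/tmp/arc_c.log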
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 8 22:30:13 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 34DF63C1; Tue, 8 Jul 2014 22:30:13 +0000 (UTC) Received: from gw.catspoiler.org (gw.catspoiler.org [75.1.14.242]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 155CD2925; Tue, 8 Jul 2014 22:30:12 +0000 (UTC) Received: from FreeBSD.org (mousie.catspoiler.org [192.168.101.2]) by gw.catspoiler.org (8.13.3/8.13.3) with ESMTP id s68MU0Dw028257; Tue, 8 Jul 2014 15:30:04 -0700 (PDT) (envelope-from truckman@FreeBSD.org) Message-Id: <201407082230.s68MU0Dw028257@gw.catspoiler.org> Date: Tue, 8 Jul 2014 15:30:00 -0700 (PDT) From: Don Lewis Subject: Re: Strange IO performance with UFS To: kostikbel@gmail.com In-Reply-To: <20140705195816.GV93733@kib.kiev.ua> MIME-Version: 1.0 Content-Type: TEXT/plain; charset=us-ascii Cc: freebsd-fs@FreeBSD.org, sparvu@systemdatarecorder.org, freebsd-hackers@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Jul 2014 22:30:13 -0000 On 5 Jul, Konstantin Belousov wrote: > On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monn? wrote: >> As can be seen from the log above, at first the workload runs fine, >> and the disk is only performing writes, but at some point (in this >> case around 40% of completion) it starts performing this >> read-before-write dance that completely screws up performance. > > I reproduced this locally. I think my patch is useless for the fio/4k write > situation. > > What happens is indeed related to the amount of the available memory. > When the size of the file written by fio is larger than the memory, > system has to recycle the cached pages. So after some moment, doing > a write has to do read-before-write, and this occurs not at the EOF > (since fio pre-allocated the job file). I reproduced this locally with dd if=/dev/zero bs=4k conv=notrunc ... For the small file case, if I flush the file from cache by unmounting the filesystem where it resides and then remounting the filesystem, then I see lots of reads right from the start. > In fact, I used 10G file on 8G machine, but I interrupted the fio > before it finish the job. The longer the previous job runs, the longer > is time for which new job does not issue reads. If I allow the job to > completely fill the cache, then the reads starts immediately on the next > job run. > > I do not see how could anything be changed there, if we want to keep > user file content on partial block writes, and we do. About the only thing I can think of that might help is to trigger readahead when we detect sequential small writes. We'll still have to do the reads, but hopefully they will be larger and occupy less time in the critical path. Writing a multiple of the filesystem blocksize is still the most efficient strategy. 
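Don's flush-and-rewrite experiment is easy to repeat; a sketch, where the device node, mount point and file size are placeholders:

# write a 1GB file in 4k chunks (no reads expected while it is cached):
dd if=/dev/zero of=/mnt/test/file bs=4k count=262144 conv=notrunc
# flush the file from the buffer cache, then write again:
umount /mnt/test && mount /dev/ada0p2 /mnt/test
dd if=/dev/zero of=/mnt/test/file bs=4k count=262144 conv=notrunc
# iostat -x -w 1 ada0 in another terminal should now show r/s > 0 from the start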
From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 07:21:20 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 24C15362 for ; Wed, 9 Jul 2014 07:21:20 +0000 (UTC) Received: from mail-qa0-x229.google.com (mail-qa0-x229.google.com [IPv6:2607:f8b0:400d:c00::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DBC3D2B61 for ; Wed, 9 Jul 2014 07:21:19 +0000 (UTC) Received: by mail-qa0-f41.google.com with SMTP id cm18so5843180qab.14 for ; Wed, 09 Jul 2014 00:21:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=nQv3dBegLuJa0UNx+Tvjr2DVvwdsMG9knQPbwB60Wxg=; b=rD+BPrvAC/OlL+T0BaEhsXcVdnlcxlJJOeTtKYhg5caMsGUN/q6TjGyhYddGQCx3zJ I/OI9/g71V8HTlndXB0sWeo1C9RIZ61fiyBzyNrV4GVoCa7IoQ6vhV/3by9ROq6y6JgG QZG+yKATf66fHFX81wUz2muEOyGePv1lzGXfISBbltsl0AsT35547jN7x1wqdvOoYMn1 jPISafbf4Rcs6jn22AsTD5T6SacEjkMu0GIOyfEO78CXxmFlkAzTd7Lkn1T5BEPjfcKY FQ/SU9jNmQeLqEMSjME2K3Ev3rXZtwWMuQYHzATmey2i1OFNoloBTHe3RS+c2VdNF/FY U3rw== MIME-Version: 1.0 X-Received: by 10.224.172.201 with SMTP id m9mr67912263qaz.32.1404890478951; Wed, 09 Jul 2014 00:21:18 -0700 (PDT) Received: by 10.96.13.133 with HTTP; Wed, 9 Jul 2014 00:21:18 -0700 (PDT) In-Reply-To: <20140708025106.GA85067@neutralgood.org> References: <20140708025106.GA85067@neutralgood.org> Date: Wed, 9 Jul 2014 08:21:18 +0100 Message-ID: Subject: Re: Using 2 SSD's to create a SLOG From: krad To: kpneal@pobox.com Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 07:21:20 -0000 An NFS server is a common task that generates lots of synchronous writes On 8 July 2014 03:51, wrote: > On Mon, Jul 07, 2014 at 06:06:43PM -0700, javocado wrote: > > I am interested in adding an SSD "SLOG" to my ZFS system so as to > > (dramatically) speed up writes on this system. > > > > My question is if ZFS will, itself, internally, mirror two SSDs that are > > used as a SLOG ? > > > > What I mean is, if ZFS is already smart enough to create a zpool mirror > > (or, on my case, a zpool raidz3) then perhaps ZFS is also smart enough to > > mirror the SLOG to two individual SSDs ? > > > > I am hoping to dumbly plug two SSDs onto motherboard SATA ports and just > > hand them over, raw, to ZFS. > > From the zpool man page: > > Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs > > The following command creates a ZFS storage pool consisting of > two, > two-way mirrors and mirrored log devices: > > # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \ > c4d0 c5d0 > > You should be able to use that example to make the 'zpool add' command to > add the mirrored log to an existing pool. > > But know that the SLOG only helps writes that are synchronous. This is in > many workloads a small fraction of the total writes. For other workloads > it is a large portion of the writes. > > Do you know for certain that you need a SLOG? > -- > Kevin P. 
Neal http://www.pobox.com/~kpn/ > On the community of supercomputer fans: > "But what we lack in size we make up for in eccentricity." > from Steve Gombosi, comp.sys.super, 31 Jul 2000 11:22:43 -0600 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 08:11:56 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2A539BCB for ; Wed, 9 Jul 2014 08:11:56 +0000 (UTC) Received: from mail-wi0-x22a.google.com (mail-wi0-x22a.google.com [IPv6:2a00:1450:400c:c05::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 9F6A62F5C for ; Wed, 9 Jul 2014 08:11:55 +0000 (UTC) Received: by mail-wi0-f170.google.com with SMTP id cc10so2324465wib.3 for ; Wed, 09 Jul 2014 01:11:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type; bh=DakfFTU7y0QwUasfRVVt8p7/PYu0TTHxNyKJFZAc6hE=; b=gOQAZLrtr9yGPAoerLQwJRugYPtyk8DMJ+yqU/4OQSHxJOXTT5o+VwBEpjXVFH5X/K PZxBBeFSiBEY6Gyx/Ok7KSjAIF+YIYSS6FRoZVT5kqwmPDf8sElIoZ62CfdhOf9eUYfh hH/O2+rV38a6CZfLlUk2+nBk/OFfflGxUUDrLyrlYkc8unhr16lFmabR0FlaqIXwgk9J kEaijX//ddijx7ian3WFFzjt4DDZoWhboPTKmed7fgJWtvQ5dnVPDdgrAUh+S9jhj92O 8P5EIvOLhZRSCnbjFVVtC3iDVTM6ZYDjnMdx8iS4xBeqgV0W8i1Q/PobjINpX0QYEIRc cHww== X-Received: by 10.180.126.9 with SMTP id mu9mr9463119wib.69.1404893513744; Wed, 09 Jul 2014 01:11:53 -0700 (PDT) Received: from [192.168.1.145] ([193.173.55.180]) by mx.google.com with ESMTPSA id 19sm101749449wjz.3.2014.07.09.01.11.52 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 09 Jul 2014 01:11:53 -0700 (PDT) Message-ID: <53BCF948.6010303@gmail.com> Date: Wed, 09 Jul 2014 10:11:52 +0200 From: Johan Hendriks User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: krad Subject: Re: Using 2 SSD's to create a SLOG References: <20140708025106.GA85067@neutralgood.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 08:11:56 -0000 op 09-07-14 09:21, krad schreef: > An NFS server is a common task that generates lots of synchronous writes > > > On 8 July 2014 03:51, wrote: > >> On Mon, Jul 07, 2014 at 06:06:43PM -0700, javocado wrote: >>> I am interested in adding an SSD "SLOG" to my ZFS system so as to >>> (dramatically) speed up writes on this system. >>> >>> My question is if ZFS will, itself, internally, mirror two SSDs that are >>> used as a SLOG ? >>> >>> What I mean is, if ZFS is already smart enough to create a zpool mirror >>> (or, on my case, a zpool raidz3) then perhaps ZFS is also smart enough to >>> mirror the SLOG to two individual SSDs ? 
>>>
>>> I am hoping to dumbly plug two SSDs onto motherboard SATA ports and just
>>> hand them over, raw, to ZFS.
>> From the zpool man page:
>>
>> Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
>>
>> The following command creates a ZFS storage pool consisting of two,
>> two-way mirrors and mirrored log devices:
>>
>> # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
>> c4d0 c5d0
>>
>> You should be able to use that example to make the 'zpool add' command to
>> add the mirrored log to an existing pool.
>>
>> But know that the SLOG only helps writes that are synchronous. This is in
>> many workloads a small fraction of the total writes. For other workloads
>> it is a large portion of the writes.
>>
>> Do you know for certain that you need a SLOG?
>> --
>> Kevin P. Neal http://www.pobox.com/~kpn/
>> On the community of supercomputer fans:
>> "But what we lack in size we make up for in eccentricity."
>> from Steve Gombosi, comp.sys.super, 31 Jul 2000 11:22:43 -0600

I would not use raw disks... The way I add a SLOG to the system is as follows. In my case I can hotswap disks, so I insert one SSD; on the console it will show up as, for example, da10. I also mark the disk itself with the label I use on the system, in this example slog01. Then I use gpart to label the disk:

# gpart create -s gpt /dev/da10
# gpart add -t freebsd-zfs -a 4k -l slog01 /dev/da10

Then I insert the second disk. If this disk shows up as da11, I use the following commands; I also mark the disk with a sticker or pen as slog02:

# gpart create -s gpt /dev/da11
# gpart add -t freebsd-zfs -a 4k -l slog02 /dev/da11

This way I know for certain which disk is slog01. If you do not label the disk itself and the /dev/daXX numbers change, you can remove the wrong disk... Then I add the slog device to the pool. In the example my pool is named storage:

# zpool add storage log mirror gpt/slog01 gpt/slog02

A zpool status will show you the whole pool, and at the end you will see the mirrored log device:

san01 ~ # zpool status
  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 1h21m with 0 errors on Tue Jul 1 06:51:21 2014
config:
        NAME            STATE   READ WRITE CKSUM
        sanstorage      ONLINE     0     0     0
          mirror-0      ONLINE     0     0     0
            gpt/disk0   ONLINE     0     0     0
            gpt/disk1   ONLINE     0     0     0
          mirror-1      ONLINE     0     0     0
            gpt/disk2   ONLINE     0     0     0
            gpt/disk3   ONLINE     0     0     0
          mirror-2      ONLINE     0     0     0
            gpt/disk4   ONLINE     0     0     0
            gpt/disk5   ONLINE     0     0     0
          mirror-3      ONLINE     0     0     0
            gpt/disk12  ONLINE     0     0     0
            gpt/disk13  ONLINE     0     0     0
          mirror-4      ONLINE     0     0     0
            gpt/disk6   ONLINE     0     0     0
            gpt/disk7   ONLINE     0     0     0
          mirror-5      ONLINE     0     0     0
            gpt/disk10  ONLINE     0     0     0
            gpt/disk11  ONLINE     0     0     0
          mirror-6      ONLINE     0     0     0
            gpt/disk8   ONLINE     0     0     0
            gpt/disk9   ONLINE     0     0     0
        logs
          mirror-7      ONLINE     0     0     0
            gpt/slog01  ONLINE     0     0     0
            gpt/slog02  ONLINE     0     0     0
errors: No known data errors

The main advantage of the gpart label is that you can use it on every SATA/SAS port in the system. If I use the front bays of the system, the disks are known as daXX, but I can put them on the SATA controller on the motherboard if I want, and they will become adaXX. Because ZFS uses the GPT labels it will always find them.
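Two sanity checks are worth running before that last zpool add, using the device names from the example above: gpart show -l prints the labels per disk, and zpool add accepts -n for a dry run that only prints the resulting layout:

# gpart show -l da10 da11
# zpool add -n storage log mirror gpt/slog01 gpt/slog02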
Please make sure you have a backup... Also, first try it on a virtual machine and get comfortable with the commands...

regards

From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 09:28:51 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4AFF132F for ; Wed, 9 Jul 2014 09:28:51 +0000 (UTC) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.21.123]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "smtp-sofia.digsys.bg", Issuer "Digital Systems Operational CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id CDF7B2690 for ; Wed, 9 Jul 2014 09:28:50 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [193.68.6.1]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.6/8.14.6) with ESMTP id s699RNLg010304 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO) for ; Wed, 9 Jul 2014 12:27:23 +0300 (EEST) (envelope-from daniel@digsys.bg) Message-ID: <53BD0AFB.3000909@digsys.bg> Date: Wed, 09 Jul 2014 12:27:23 +0300 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Using 2 SSD's to create a SLOG References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 09:28:51 -0000

On 08.07.14 04:06, javocado wrote:
> I am hoping to dumbly plug two SSDs onto motherboard SATA ports and just
> hand them over, raw, to ZFS.
>
Others have already commented on how to set up a mirrored SLOG. In addition to that, because of the nature of SSDs and SLOG, I would recommend the following:

The SLOG does not need to be large; it only needs to cover several seconds of your synchronous write throughput -- usually a few GB are plenty. Today's SSDs are much larger than needed for a SLOG. But today's SSDs also suffer severe performance degradation, especially for writing, when you fill them up with data and they need to do garbage collection. Also, most SSDs have "good performance" only when using an 8GB span, not the whole drive. All of this only makes sense if the drive has TRIM, and FreeBSD already supports TRIM for a ZFS SLOG.

Therefore, ensure you TRIM the entire drive, then partition it with GPT to use only (say) 8GB for the SLOG. Leave the rest unallocated -- you will never write there, but the drive's controller will use those blocks as spares for TRIM, and this will both improve performance and make the drive last much longer. Then add both slices as a mirrored log device to your ZFS pool.
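A sketch of that partitioning with gpart, borrowing the label convention from earlier in the thread; da10 is hypothetical, and the drive is assumed freshly erased so its controller sees all flash as free:

# gpart create -s gpt da10
# gpart add -t freebsd-zfs -a 4k -s 8g -l slog01 da10

The rest of the drive is deliberately left unallocated.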
Daniel

From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 11:30:54 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C4A68E8D for ; Wed, 9 Jul 2014 11:30:54 +0000 (UTC) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id 87FDA20F9 for ; Wed, 9 Jul 2014 11:30:54 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:8cc3:e5a9:fcf2:1e1c]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 0F43D4AC0F for ; Wed, 9 Jul 2014 15:30:49 +0400 (MSK) Date: Wed, 9 Jul 2014 15:30:42 +0400 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <578789604.20140709153042@serebryakov.spb.ru> To: freebsd-fs@freebsd.org Subject: Is it bug in UFS SUJ or broken RAM/bus? MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 11:30:54 -0000

Hello, Freebsd-fs.

FreeBSD 10-STABLE.

panic: handle_written_inodeblock: Invalid link count 65535 for inodedep 0xfffff800829d3200

#1 0xffffffff8045fb62 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:452
#2 0xffffffff8045ff24 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:759
#3 0xffffffff805bc898 in softdep_disk_write_complete (bp=) at /usr/src/sys/ufs/ffs/ffs_softdep.c:11270
#4 0xffffffff804e5313 in bufdone_finish (bp=0xfffffe0171db03c0) at buf.h:420
#5 0xffffffff804e5177 in bufdone (bp=) at /usr/src/sys/kern/vfs_bio.c:3754
#6 0xffffffff803fa381 in g_io_schedule_up (tp=) at /usr/src/sys/geom/geom_io.c:845
#7 0xffffffff803fa8ad in g_up_procbody (arg=) at /usr/src/sys/geom/geom_kern.c:98
#8 0xffffffff80430a3a in fork_exit (callout=0xffffffff803fa840 , arg=0x0, frame=0xfffffe01a2b60c00) at /usr/src/sys/kern/kern_fork.c:995
#9 0xffffffff8061c26e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
#10 0x0000000000000000 in ??
() -- // Black Lion AKA Lev Serebryakov From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 12:23:52 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A2F235C6; Wed, 9 Jul 2014 12:23:52 +0000 (UTC) Received: from mail109.syd.optusnet.com.au (mail109.syd.optusnet.com.au [211.29.132.80]) by mx1.freebsd.org (Postfix) with ESMTP id 3750B269B; Wed, 9 Jul 2014 12:23:52 +0000 (UTC) Received: from c122-106-147-133.carlnfd1.nsw.optusnet.com.au (c122-106-147-133.carlnfd1.nsw.optusnet.com.au [122.106.147.133]) by mail109.syd.optusnet.com.au (Postfix) with ESMTPS id D653FD65705; Wed, 9 Jul 2014 22:23:49 +1000 (EST) Date: Wed, 9 Jul 2014 22:23:48 +1000 (EST) From: Bruce Evans X-X-Sender: bde@besplex.bde.org To: Don Lewis Subject: Re: Strange IO performance with UFS In-Reply-To: <201407082230.s68MU0Dw028257@gw.catspoiler.org> Message-ID: <20140709213958.K1732@besplex.bde.org> References: <201407082230.s68MU0Dw028257@gw.catspoiler.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.1 cv=QIpRGG7L c=1 sm=1 tr=0 a=7NqvjVvQucbO2RlWB8PEog==:117 a=PO7r1zJSAAAA:8 a=a5X3N0CLKfYA:10 a=kj9zAlcOel0A:10 a=JzwRw_2MAAAA:8 a=j_qACPGiQZ6BHRBonU4A:9 a=CjuIK1q_8ugA:10 Cc: freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org, sparvu@systemdatarecorder.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 12:23:52 -0000 On Tue, 8 Jul 2014, Don Lewis wrote: > On 5 Jul, Konstantin Belousov wrote: >> On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monné wrote: > >>> As can be seen from the log above, at first the workload runs fine, >>> and the disk is only performing writes, but at some point (in this >>> case around 40% of completion) it starts performing this >>> read-before-write dance that completely screws up performance. >> >> I reproduced this locally. I think my patch is useless for the fio/4k write >> situation. >> >> What happens is indeed related to the amount of the available memory. >> When the size of the file written by fio is larger than the memory, >> the system has to recycle the cached pages. So after some point, doing >> a write has to do read-before-write, and this occurs not at the EOF >> (since fio pre-allocated the job file). > > I reproduced this locally with dd if=/dev/zero bs=4k conv=notrunc ... > For the small file case, if I flush the file from cache by unmounting > the filesystem where it resides and then remounting the filesystem, then > I see lots of reads right from the start. This seems to be related to kern/178997: Heavy disk I/O may hang system. Test programs doing more complicated versions of conv=notrunc caused even worse problems when run in parallel. I lost track of what happened with that. I think kib committed a partial fix that doesn't apply to the old version of FreeBSD that I use. >> In fact, I used a 10G file on an 8G machine, but I interrupted the fio >> before it finished the job. The longer the previous job runs, the longer >> the time for which the new job does not issue reads. If I allow the job to >> completely fill the cache, then the reads start immediately on the next >> job run.
>> >> I do not see how anything could be changed there, if we want to keep >> user file content on partial block writes, and we do. > > About the only thing I can think of that might help is to trigger > readahead when we detect sequential small writes. We'll still have to > do the reads, but hopefully they will be larger and occupy less time in > the critical path. ffs_balloc*() already uses cluster_write() so sequential small writes already normally do at least 128K of readahead and you should rarely see the 4K-reads (except with O_DIRECT?). msdosfs is missing this readahead. I never got around to sending my patches for this to kib in the PR 178997 discussion. Here I see full clustering with 64K-clusters on the old version of FreeBSD, but my drive doesn't like going back and forth, so the writes go 8 times as slow as without the reads instead of only 2 times as slow. (It's an old ATA drive with a ~1MB buffer, but apparently has dumb firmware so seeking back just 64K is too much for it to cache.) Just remembered I have a newer SATA drive with a ~32MB buffer. It only goes 3 times as slow. The second drive is also on a not quite so old version of FreeBSD that certainly doesn't have any workarounds for PR 178997. All file systems were mounted async, which shouldn't affect this much. > Writing a multiple of the filesystem blocksize is still the most > efficient strategy. Except when the filesystem block size is too large to be efficient. The FreeBSD ffs default block size of 32K is slow for small files. Fragments reduce its space wastage but interact badly with the buffer cache. Linux avoids some of these problems by using smaller filesystem block sizes and not using fragments (at least in old filesystems). Bruce From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 12:54:00 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 42C35FBD; Wed, 9 Jul 2014 12:54:00 +0000 (UTC) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D8E2229A6; Wed, 9 Jul 2014 12:53:59 +0000 (UTC) Received: from tom.home (kib@localhost [127.0.0.1]) by kib.kiev.ua (8.14.9/8.14.9) with ESMTP id s69CrnJr011171 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 9 Jul 2014 15:53:49 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.8.3 kib.kiev.ua s69CrnJr011171 Received: (from kostik@localhost) by tom.home (8.14.9/8.14.9/Submit) id s69Crngi011170; Wed, 9 Jul 2014 15:53:49 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Wed, 9 Jul 2014 15:53:49 +0300 From: Konstantin Belousov To: Bruce Evans Subject: Re: Strange IO performance with UFS Message-ID: <20140709125349.GV93733@kib.kiev.ua> References: <201407082230.s68MU0Dw028257@gw.catspoiler.org> <20140709213958.K1732@besplex.bde.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="IF36ViUsfxDRDLKY" Content-Disposition: inline In-Reply-To: <20140709213958.K1732@besplex.bde.org> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00,
DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on tom.home Cc: freebsd-fs@freebsd.org, sparvu@systemdatarecorder.org, Don Lewis , freebsd-hackers@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 12:54:00 -0000 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline On Wed, Jul 09, 2014 at 10:23:48PM +1000, Bruce Evans wrote: > On Tue, 8 Jul 2014, Don Lewis wrote: > > > On 5 Jul, Konstantin Belousov wrote: > >> On Sat, Jul 05, 2014 at 06:18:07PM +0200, Roger Pau Monné wrote: > > > >>> As can be seen from the log above, at first the workload runs fine, > >>> and the disk is only performing writes, but at some point (in this > >>> case around 40% of completion) it starts performing this > >>> read-before-write dance that completely screws up performance. > >> > >> I reproduced this locally. I think my patch is useless for the fio/4k write > >> situation. > >> > >> What happens is indeed related to the amount of the available memory. > >> When the size of the file written by fio is larger than the memory, > >> the system has to recycle the cached pages. So after some point, doing > >> a write has to do read-before-write, and this occurs not at the EOF > >> (since fio pre-allocated the job file). > > > > I reproduced this locally with dd if=/dev/zero bs=4k conv=notrunc ... > > For the small file case, if I flush the file from cache by unmounting > > the filesystem where it resides and then remounting the filesystem, then > > I see lots of reads right from the start. > > This seems to be related to kern/178997: Heavy disk I/O may hang system. > Test programs doing more complicated versions of conv=notrunc caused > even worse problems when run in parallel. I lost track of what happened > with that. I think kib committed a partial fix that doesn't apply to > the old version of FreeBSD that I use. I do not think this is related to kern/178997. Yes, kern/178997 is only partially fixed; parallel reads and a starved writer could still cause buffer cache livelock. On the other hand, I am not sure how feasible it is to create a real test case for this. A fix would not be easy. > > >> In fact, I used a 10G file on an 8G machine, but I interrupted the fio > >> before it finished the job. The longer the previous job runs, the longer > >> the time for which the new job does not issue reads. If I allow the job to > >> completely fill the cache, then the reads start immediately on the next > >> job run. > >> > >> I do not see how anything could be changed there, if we want to keep > >> user file content on partial block writes, and we do. > > > > About the only thing I can think of that might help is to trigger > > readahead when we detect sequential small writes. We'll still have to > > do the reads, but hopefully they will be larger and occupy less time in > > the critical path. > > ffs_balloc*() already uses cluster_write() so sequential small writes > already normally do at least 128K of readahead and you should rarely > see the 4K-reads (except with O_DIRECT?). You mean cluster_read(). Indeed, ffs_balloc* already does this. This is also useful since it preallocates vnode pages, making writes even less blocking.
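To make that read-before-write pattern concrete (an editor's illustration, not from the thread; the mount point and sizes are arbitrary), it can be reproduced by rewriting a file larger than RAM in sub-block chunks while watching the disk:

# create a file larger than RAM (here ~12G on an 8G machine), then rewrite it in 4k chunks
dd if=/dev/zero of=/mnt/testfile bs=1m count=12288
dd if=/dev/zero of=/mnt/testfile bs=4k conv=notrunc
# in another terminal: once the page cache no longer holds the blocks,
# reads show up interleaved with the writes
iostat -x 1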
> > msdosfs is missing this readahead. I never got around to sending > my patches for this to kib in the PR 178997 discussion. > > Here I see full clustering with 64K-clusters on the old version of > FreeBSD, but my drive doesn't like going back and forth, so the writes > go 8 times as slow as without the reads instead of only 2 times as > slow. (It's an old ATA drive with a ~1MB buffer, but apparently has > dumb firmware so seeking back just 64K is too much for it to cache.) > Just remembered I have a newer SATA drive with a ~32MB buffer. It > only goes 3 times as slow. The second drive is also on a not quite > so old version of FreeBSD that certainly doesn't have any workarounds > for PR 178997. All file systems were mounted async, which shouldn't > affect this much. > > > Writing a multiple of the filesystem blocksize is still the most > > efficient strategy. > > Except when the filesystem block size is too large to be efficient. > The FreeBSD ffs default block size of 32K is slow for small files. > Fragments reduce its space wastage but interact badly with the > buffer cache. Linux avoids some of these problems by using smaller > filesystem block sizes and not using fragments (at least in old > filesystems). > > Bruce From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 14:42:38 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D6A445E0 for ; Wed, 9 Jul 2014 14:42:38 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id BECF3260C for ; Wed, 9 Jul 2014 14:42:38 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s69EgcZN031271 for ; Wed, 9 Jul 2014 14:42:38 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 156781] [zfs] zfs is losing the snapshot directory, Date: Wed, 09 Jul 2014 14:42:37 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.2-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: andrew@azar-a.net X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org
X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 14:42:38 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=156781 andrew@azar-a.net changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |andrew@azar-a.net --- Comment #12 from andrew@azar-a.net --- We've experienced the same problem on 8.2-STABLE. It seems to be related to an unfinished upgrade to 8.3-RELEASE (we cannot shut down the machine until a replacement is ready). So the world and kernel upgrade is installed, but the system has not been rebooted. -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 15:01:14 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 454E5EE8 for ; Wed, 9 Jul 2014 15:01:14 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2D8262861 for ; Wed, 9 Jul 2014 15:01:14 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s69F1ExQ099787 for ; Wed, 9 Jul 2014 15:01:14 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 156781] [zfs] zfs is losing the snapshot directory, Date: Wed, 09 Jul 2014 15:01:13 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.2-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 15:01:14 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=156781 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |smh@FreeBSD.org --- Comment #13 from Steven Hartland --- You're aware 8.3 is now EOL, right? -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Wed Jul 9 15:45:31 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 60DB3684 for ; Wed, 9 Jul 2014 15:45:31 +0000 (UTC) Received: from mail-wi0-x232.google.com (mail-wi0-x232.google.com [IPv6:2a00:1450:400c:c05::232]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id F34CC2C2B for ; Wed, 9 Jul 2014 15:45:30 +0000 (UTC) Received: by mail-wi0-f178.google.com with SMTP id f8so2509790wiw.11 for ; Wed, 09 Jul 2014 08:45:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=4GxASv5JZ+1vAD2tDLTPmg+4zS4r24rDzOJGtArUfuE=; b=OI30FyxYxdla0WRMR7IXrOEXaCjodCi0x5ZTJBTpBiUMPtodBVvqlz2Uk+mN3v4ZsI 0GCsfXC4AB3JiHhv4go6Q+0Plvp2m+f47nfWwwORfJttCVTC57OYu/JcI1W5ooKIaRzi ZVEXKRzsQ51592MeLQ0/3u6FIKjzeNnsS7DkdKw0xqffgLIl3OZPTZUcfflXYPxoIPsb 5h/pQHc6VYzkwzQL/h88Oqt8cVLYykrWX0zFnFXilayUcwjBR9M6R0CgpH5tDwkR/THV TOUVBIg1isLV2EkZ9MAZwLSySDNy0eYbvkmpUd0kw+ep4GYhYFGhL2lwyIGxyx9xr2kt I1CA== MIME-Version: 1.0 X-Received: by 10.180.105.170 with SMTP id gn10mr12632543wib.31.1404920729043; Wed, 09 Jul 2014 08:45:29 -0700 (PDT) Received: by 10.217.140.195 with HTTP; Wed, 9 Jul 2014 08:45:28 -0700 (PDT) Date: Wed, 9 Jul 2014 17:45:28 +0200 Message-ID: Subject: ZFS: always use ZIL instead of memory? From: Alexander Kriventsov To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Jul 2014 15:45:31 -0000 Hello. Is it possible to always use the ZIL instead of memory to log write transactions for any write operation (sync and async)? I see that the ZIL is used only for sync operations, but I need it for all write operations.
Thanks a lot From owner-freebsd-fs@FreeBSD.ORG Thu Jul 10 02:30:50 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 143DAFFC for ; Thu, 10 Jul 2014 02:30:50 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F09F22456 for ; Thu, 10 Jul 2014 02:30:49 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6A2UnJs089481 for ; Thu, 10 Jul 2014 02:30:49 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 187594] [zfs] [patch] ZFS ARC behavior problem and fix Date: Thu, 10 Jul 2014 02:30:49 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mcdouga9@egr.msu.edu X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jul 2014 02:30:50 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 --- Comment #21 from mcdouga9@egr.msu.edu --- I have been using this patch for a while on two desktops (4G and 16G) and my swapping activity has stopped with no visible negative impact. The positive impact for me is no longer having to wait for applications to swap back in when I need to use them. I tried r265945 and vm.lowmem_period=0 and neither was as effective as the patch in this bug report. I plan to roll this patch into my next build for general deployment to my servers and start experimenting with removing my vfs.zfs.arc_max setting which I am currently using on some. I hope it can gain enough consensus to at least commit to -current. Thanks. -- You are receiving this mail because: You are the assignee for the bug.
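For reference (an editor's note, not part of the report; the 8G value is purely illustrative), the vfs.zfs.arc_max cap mentioned above is a loader tunable, and the ARC target and current size can be checked at runtime:

# /boot/loader.conf
vfs.zfs.arc_max="8G"

# at runtime, compare the ARC target (c) with its current size
sysctl kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.size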
From owner-freebsd-fs@FreeBSD.ORG Thu Jul 10 18:43:04 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 39B75D77 for ; Thu, 10 Jul 2014 18:43:04 +0000 (UTC) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 10EA027E9 for ; Thu, 10 Jul 2014 18:43:03 +0000 (UTC) Received: from peevish.spa.umn.edu ([128.101.220.230]) by mail.physics.umn.edu with esmtp (Exim 4.77 (FreeBSD)) (envelope-from ) id 1X5JJ2-00073Y-7e for freebsd-fs@freebsd.org; Thu, 10 Jul 2014 13:42:56 -0500 Received: by peevish.spa.umn.edu (Postfix, from userid 5000) id 2C0CB472; Thu, 10 Jul 2014 13:42:56 -0500 (CDT) Date: Thu, 10 Jul 2014 13:42:56 -0500 From: Graham Allan To: freebsd-fs@freebsd.org Subject: Re: replaced da devices not being detected Message-ID: <20140710184256.GM18548@physics.umn.edu> References: <53B5B712.5050404@physics.umn.edu> <53B5EA11.4060509@physics.umn.edu> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <53B5EA11.4060509@physics.umn.edu> User-Agent: Mutt/1.5.20 (2009-12-10) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Jul 2014 18:43:04 -0000 On Thu, Jul 03, 2014 at 06:41:05PM -0500, Graham Allan wrote: > On 7/3/2014 3:03 PM, Graham Allan wrote: > > > >It does seem to me like we get to replace some number of drives without > >incident, then after some point no new da devices are detected. > > I should have given some more info about the HBA etc in use - it's > an LSI 9205-8e (SAS2308, using mps driver), and dmesg is telling me > the HBA has (IT) firmware 14.00.00.00. Don't know if this is good or > bad but it appears to match the mps driver version, if that means > anything. > > I can see LSI is up to firmware 19.00.00.00 for the card, and I know > I've seen discussion here of the favored version, but can't find it > now. > However SAS2IRCU can see the added drive even when camcontrol fails > to, so I'm not sure that it's related to the HBA as such - unless > SAS2IRCU gets that information by a different path such as querying > the enclosure controller. Funnily enough the "missing" drive showed up round about the time I was messing with sas2ircu - though I didn't notice at first. The first time I ran "sas2ircu 0 display", it took a *really* long time to respond - subsequent runs were instant.
I see now in kern.log that something issued a reinit to the HBA: Jul 3 17:57:39 hostname kernel: mps0: Calling Reinit from mps_wait_command Jul 3 17:57:39 hostname kernel: mps0: mps_reinit sc 0xffffff8002a77000 Jul 3 17:57:39 hostname kernel: mps0: mps_reinit mask interrupts Jul 3 17:57:40 hostname kernel: mps0: mpssas_handle_reinit startup Jul 3 17:57:40 hostname kernel: mps0: mpssas_announce_reset code 1 target -1 lun -1 Jul 3 17:57:40 hostname kernel: mps0: mpssas_complete_all_commands Jul 3 17:57:40 hostname kernel: (noperiph:mps0:0:4294967295:0): SMID 370 waking up cm 0xffffff8002aa7a10 state 1 ccb 0 for diag reset Jul 3 17:57:40 hostname kernel: mps0: mpssas_handle_reinit startup 0 tm 0 after command completion Jul 3 17:57:40 hostname kernel: mps0: mps_reinit doorbell 0x24000000 Jul 3 17:57:40 hostname kernel: mps0: mps_reinit unmask interrupts post 0 free 1055 Jul 3 17:57:40 hostname kernel: mps0: mps_reinit restarting post 0 free 1055 Jul 3 17:57:40 hostname kernel: mps0: mps_reinit finished sc 0xffffff8002a77000 post 0 free 1055 Jul 3 17:57:40 hostname kernel: mps0: Reinit success Jul 3 17:57:40 hostname kernel: mps0: mps_user_pass_thru: invalid request: error 60 the drive showed up right after this. Jul 3 18:00:15 hostname kernel: da91 at mps0 bus 0 scbus0 target 218 lun 0 Jul 3 18:00:15 hostname kernel: da91: Fixed Direct Access SCSI-6 device Jul 3 18:00:15 hostname kernel: da91: 600.000MB/s transfers Jul 3 18:00:15 hostname kernel: da91: Command Queueing enabled Jul 3 18:00:15 hostname kernel: da91: 2861588MB (5860533168 512 byte sectors: 255H 63S/T 364801C) I suspect sas2ircu was probably responsible for this. The system was generally unresponsive during that first sas2ircu run, but was normal before and after. Does this make any sense? Is there a recommended firmware version (other than our current 14.00.00.00) for the 9205-8e which might help with this? 
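As an aside not from the thread (an editor's sketch of the usual first steps when a swapped drive fails to appear), it can help to compare the CAM and HBA views after forcing a rescan:

camcontrol rescan all    # ask CAM to rescan all buses for new devices
camcontrol devlist       # list the devices CAM currently knows about
sas2ircu 0 display       # the HBA's own view, independent of CAM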
Thanks for any ideas, Graham -- ------------------------------------------------------------------------- Graham Allan School of Physics and Astronomy - University of Minnesota ------------------------------------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Jul 11 11:25:25 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8138BCBD for ; Fri, 11 Jul 2014 11:25:25 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 698162E27 for ; Fri, 11 Jul 2014 11:25:25 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6BBPPQe009396 for ; Fri, 11 Jul 2014 11:25:25 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory Date: Fri, 11 Jul 2014 11:25:25 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: vsjcfm@gmail.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jul 2014 11:25:25 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510 --- Comment #8 from vsjcfm@gmail.com --- (In reply to Steven Hartland from comment #7) > Just because the processes are using small amounts of memory doesn't mean > other aspects of the kernel aren't spiking up and demanding ram such as > mbufs, so just because it's only running nginx doesn't mean you haven't seen > a memory spike in other areas. I'm using static mbuf settings; they're using ~500M of RAM. I'm also wondering what part of the kernel could use ~50G of RAM for a short period - this amount is always free (not inactive). > Are you seeing any movement at all over time in: > kstat.zfs.misc.arcstats.c I will update the machine to 9.3R and build some MRTG graphs for memory usage. -- You are receiving this mail because: You are the assignee for the bug.
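A trivial way to capture the movement in arcstats.c that Steven asks about (an editor's sketch; the interval and log path are arbitrary):

while :; do
    echo "$(date +%s) $(sysctl -n kstat.zfs.misc.arcstats.c)" >> /var/log/arc_c.log
    sleep 60
done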
From owner-freebsd-fs@FreeBSD.ORG Fri Jul 11 12:35:03 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id F2EAAC53 for ; Fri, 11 Jul 2014 12:35:03 +0000 (UTC) Received: from smtp-outbound.userve.net (smtp-outbound.userve.net [217.196.1.22]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "*.userve.net", Issuer "Go Daddy Secure Certificate Authority - G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1359C2454 for ; Fri, 11 Jul 2014 12:35:02 +0000 (UTC) Received: from owa.usd-group.com (owa.usd-group.com [217.196.1.2]) by smtp-outbound.userve.net (8.14.7/8.14.7) with ESMTP id s6BCCaje094843 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL) for ; Fri, 11 Jul 2014 13:12:36 +0100 (BST) (envelope-from matt.churchyard@userve.net) Received: from SERVER.ad.usd-group.com (192.168.0.1) by SERVER.ad.usd-group.com (192.168.0.1) with Microsoft SMTP Server (TLS) id 15.0.516.32; Fri, 11 Jul 2014 13:12:10 +0100 Received: from SERVER.ad.usd-group.com ([fe80::b19d:892a:6fc7:1c9]) by SERVER.ad.usd-group.com ([fe80::b19d:892a:6fc7:1c9%12]) with mapi id 15.00.0516.029; Fri, 11 Jul 2014 13:12:10 +0100 From: Matt Churchyard To: "freebsd-fs@freebsd.org" Subject: Re: ZFS: always use ZIL instead of memory? Thread-Topic: Re: ZFS: always use ZIL instead of memory? Thread-Index: Ac+c/8lkDizB9v0LQFCzZGyBY7iT+A== Date: Fri, 11 Jul 2014 12:12:10 +0000 Message-ID: <3cb5c826a61b43f588dda48537076584@SERVER.ad.usd-group.com> Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.0.10] MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jul 2014 12:35:04 -0000 > Hello. > Is it possible to always use the ZIL instead of memory to log write transactions > for any write operation (sync and async)? > I see that the ZIL is used only for sync operations, but I need it for all write > operations. > Thanks a lot ZFS never uses the ZIL *instead* of memory. All writes, whether they are sync or async, are tracked in memory, as part of the current transaction. After a certain amount of time, all those write operations in memory are committed to disk and a new transaction is started. Because it's possible for the machine to crash while writes are still sitting in memory, ZFS also puts a copy of all sync writes in the ZIL (which should be on non-volatile storage somewhere). It does this because when the application requested that sync write, ZFS had to guarantee that the data made it to disk. There is no such promise with async writes. In normal operation ZFS still uses the in-memory copy for everything, and only reads the ZIL after a crash (in order to replay any sync writes that were lost in RAM).
If you want all writes to be written to the ZIL, regardless of whether they are sync or async, set the sync option to always on your dataset: zfs set sync=always my/dataset Although understand that, as explained above, all writes will still be handled in memory as normal; it's just that a copy of all writes will also be written to the ZIL. The benefit of sync=always is that if the machine crashes, all writes can be recovered, not just those that were written in sync mode. The downside is that it may affect performance, especially if you don't have a fast ZIL device. As far as I am aware, there is no way to stop ZFS using memory for the current write transaction; it's just part of how ZFS works. Matt From owner-freebsd-fs@FreeBSD.ORG Fri Jul 11 19:18:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6582399A for ; Fri, 11 Jul 2014 19:18:48 +0000 (UTC) Received: from mail-lb0-x236.google.com (mail-lb0-x236.google.com [IPv6:2a00:1450:4010:c04::236]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CE7B12D61 for ; Fri, 11 Jul 2014 19:18:47 +0000 (UTC) Received: by mail-lb0-f182.google.com with SMTP id c11so1240399lbj.13 for ; Fri, 11 Jul 2014 12:18:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=nSbkHM43J5Yzv2nbhHCghn5UKqfEcU3lFLwwVih7uoc=; b=iYTCo/MsZ7zgbbb8wwWA8HT5auAcg6EPWIjp4gHBP3tX0tMxWf062lmYkWgCKUjfjx GuUPkA/qWaj17JEs04QTFR8zEy9M5Ho10et6wlLCMQfopghl+0P5cvRNHnmCMRPmLMF6 2EBDCbL0dsIaCDRQzlylQWq1xVF2M+yqbUNCKayPyiBzwIcJuH7V+lZwZfuDegXPBydO BSmPFxEFT8Mbysk8MonZ8TcSbTteyoFKHP/5/PLWEaZa3p7n7OpDb1VEW9jWJ0oL8cP5 bF8wtvm5p6khidhE3oOS8qo36Z9yzhhHXvOPwncCtEhV7h37YT1y692D2CvSo+9aAltl EOog== MIME-Version: 1.0 X-Received: by 10.152.207.37 with SMTP id lt5mr866425lac.10.1405106324391; Fri, 11 Jul 2014 12:18:44 -0700 (PDT) Received: by 10.115.2.3 with HTTP; Fri, 11 Jul 2014 12:18:44 -0700 (PDT) In-Reply-To: <53BD0AFB.3000909@digsys.bg> References: <53BD0AFB.3000909@digsys.bg> Date: Fri, 11 Jul 2014 12:18:44 -0700 Message-ID: Subject: Re: Using 2 SSD's to create a SLOG From: javocado To: Daniel Kalchev Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jul 2014 19:18:48 -0000 Thank you for your feedback. We understand that the devices are mirrored and tolerant. To further my understanding, confirm that I should be able to yank one of the SLOG SSD drives and zfs won't care, and I can subsequently replace it (issuing what command?) and zfs will then re-mirror it? And if both drives are removed then only the sync data would be lost? But if the drives are restored, or in the case of a power loss when the server comes back up, the sync data on the SSD's will be processed once zfs starts?
On Wed, Jul 9, 2014 at 2:27 AM, Daniel Kalchev wrote: > > On 08.07.14 04:06, javocado wrote: > >> I am hoping to dumbly plug two SSDs onto motherboard SATA ports and just >> hand them over, raw, to ZFS. >> >> > Others already commented on how you should set up a mirrored SLOG. In addition > to that, because of the nature of SSDs and SLOG, I would recommend the > following: > > The SLOG does not need to be large; it only needs to cover several > seconds of your synchronous write throughput -- usually a few GB are plenty. > Today's SSDs are much larger than needed for a SLOG. But today's SSDs also > suffer severe performance degradation, especially for writes, once you fill > them up with data and they need to do garbage collection. Also, most SSDs > have "good performance" only when using an 8GB span, not the whole drive. > All of this only makes sense if the drive has TRIM. FreeBSD already > supports TRIM for the ZFS SLOG. Therefore, ensure you TRIM the entire > drive, then partition it with GPT to only use (say) 8GB for the SLOG. Leave > the rest unallocated -- you will never write there, but the drive's > controller will use those blocks as spares for TRIM, and this will both > improve performance and make the drive last much longer. Then add both > slices as a mirrored log device to your ZFS pool. > > Daniel > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Jul 11 19:30:38 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DCE9ED11 for ; Fri, 11 Jul 2014 19:30:37 +0000 (UTC) Received: from mail-oa0-x235.google.com (mail-oa0-x235.google.com [IPv6:2607:f8b0:4003:c02::235]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A514B2E3D for ; Fri, 11 Jul 2014 19:30:37 +0000 (UTC) Received: by mail-oa0-f53.google.com with SMTP id l6so1703096oag.40 for ; Fri, 11 Jul 2014 12:30:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=U7UvXnB7QlnObTmrm/aK/Ivc1eg5mP1EKBrQTsNDVHU=; b=CImWGbNiKpUVbwlCekAj/9lCEwJ9xWcnTV+E3YJv8ST2DK006QM/CAI0YuEmHyx7fQ MTWA6IXa1+HkRpLrWQra6akzdvDypHE9eXIQsmTDjyFbxyCylB94g2/0x208tTbtTfjx SaQGten011hfoMYyGEEkyTU8UuAltSWGK7hkE/P5hHoIzTk5T6Rk+uU4+ABMN7BM/9pg qbjasdLyk6lOAM1vfYIAIKZY97UCaHNkZuznOTf9Lpa+pHC+Ksv1aX6E9sX845LFVKWA B3d3VaodG9H+c9QTF4aYnbCJL4j2f9M8WLDZfIuNxWhiVHeF5VURP3OeXjqW8M82cFXX OdNQ== MIME-Version: 1.0 X-Received: by 10.182.60.65 with SMTP id f1mr1258600obr.78.1405107036881; Fri, 11 Jul 2014 12:30:36 -0700 (PDT) Received: by 10.202.49.198 with HTTP; Fri, 11 Jul 2014 12:30:36 -0700 (PDT) In-Reply-To: References: <53BD0AFB.3000909@digsys.bg> Date: Fri, 11 Jul 2014 12:30:36 -0700 Message-ID: Subject: Re: Using 2 SSD's to create a SLOG From: Freddie Cash To: javocado Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence:
list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 11 Jul 2014 19:30:38 -0000 On Fri, Jul 11, 2014 at 12:18 PM, javocado wrote: > Thank you for your feedback. We understand that the devices are mirrored > and tolerant. > To further my understanding, confirm that I should be able to yank one of > the SLOG SSD drives and zfs won't care, and I can subsequently replace it > (issuing what command?) and zfs will then re-mirror it? And if both drives > are removed then only the sync data would be lost? But if the drives are > restored, or in the case of a power loss when the server comes back up, the > sync data on the SSD's will be processed once zfs starts? > http://man.freebsd.org/zpool :) Read the parts about "attach" and "detach". "attach" will add a drive to an existing single-drive vdev, thus converting it to a 2-way mirror vdev. "attach" can also be used to add a drive to an existing mirror vdev, thus converting it to a 3-way mirror vdev. "detach" removes a drive from an existing mirror vdev. So, if you have a 2-way mirror vdev, and one drive is dying, you can attach a new drive to it (converting it to a 3-way mirror), wait for it to resilver, then detach the dying drive. That way, you never lose redundancy in the mirror vdev. :) -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 01:42:38 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D9C556C8 for ; Sat, 12 Jul 2014 01:42:38 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C11B92DFD for ; Sat, 12 Jul 2014 01:42:38 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C1gcFa058226 for ; Sat, 12 Jul 2014 01:42:38 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 71774] [ntfs] NTFS cannot "see" files on a WinXP filesystem Date: Sat, 12 Jul 2014 01:42:38 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 5.3-BETA4 X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 01:42:38 -0000
http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 01:47:13 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E089E8AC for ; Sat, 12 Jul 2014 01:47:13 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C63662E21 for ; Sat, 12 Jul 2014 01:47:13 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C1lD7N063577 for ; Sat, 12 Jul 2014 01:47:13 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 80088] [smbfs] Incorrect file time setting on NTFS mounted via mount_smbfs Date: Sat, 12 Jul 2014 01:47:13 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 4.11-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 01:47:14 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=80088 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #4 from joeb1@a1poweruser.com --- 4.11 is so far past EOL as this pr is now meaningless. And ntfs has been removed from 10.0 base see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 01:51:10 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 270A3C47 for ; Sat, 12 Jul 2014 01:51:10 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0E5E42EB6 for ; Sat, 12 Jul 2014 01:51:10 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C1p9iB071773 for ; Sat, 12 Jul 2014 01:51:09 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 99290] [ntfs] mount_ntfs ignorant of cluster sizes Date: Sat, 12 Jul 2014 01:51:10 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 6.1-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 01:51:10 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=99290 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #2 from joeb1@a1poweruser.com --- 6.1 is so far past EOL that this PR is now meaningless. And ntfs has been removed from the 10.0 base; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 01:54:34 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 05C3DCE3 for ; Sat, 12 Jul 2014 01:54:34 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E0D9A2ECE for ; Sat, 12 Jul 2014 01:54:33 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C1sXTT098373 for ; Sat, 12 Jul 2014 01:54:33 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 114847] [ntfs] [patch] [request] dirmask support for NTFS ala MSDOSFS Date: Sat, 12 Jul 2014 01:54:34 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 6.2-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 01:54:34 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=114847 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #5 from joeb1@a1poweruser.com --- Close this PR. 6.2 is so far past EOL that this PR is now meaningless. And ntfs has been removed from the 10.0 base; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 01:56:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A2F5ED64 for ; Sat, 12 Jul 2014 01:56:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 89DBB2ED9 for ; Sat, 12 Jul 2014 01:56:11 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C1uBt2000152 for ; Sat, 12 Jul 2014 01:56:11 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 118107] [ntfs] [panic] Kernel panic when accessing a file at NTFS file system [regression] Date: Sat, 12 Jul 2014 01:56:11 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 7.0-BETA3 X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 01:56:11 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=118107 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #3 from joeb1@a1poweruser.com --- Close this PR. 7.0 is so far past EOL that this PR is now meaningless. And ntfs has been removed from the 10.0 base; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 01:59:56 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8F1FAF6F for ; Sat, 12 Jul 2014 01:59:56 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 75C662F05 for ; Sat, 12 Jul 2014 01:59:56 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C1xu1g004094 for ; Sat, 12 Jul 2014 01:59:56 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 120482] [ntfs] [patch] Sync style changes between NetBSD and FreeBSD ntfs filesystem Date: Sat, 12 Jul 2014 01:59:56 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 01:59:56 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=120482 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #2 from joeb1@a1poweruser.com --- Close this PR. 8.0 is past EOL, so this PR is now meaningless. And ntfs has been removed from the 10.0 base; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 02:01:48 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D0618FFE for ; Sat, 12 Jul 2014 02:01:48 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B6B8D2F87 for ; Sat, 12 Jul 2014 02:01:48 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C21mhq027406 for ; Sat, 12 Jul 2014 02:01:48 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 120483] [ntfs] [patch] NTFS filesystem locking changes Date: Sat, 12 Jul 2014 02:01:48 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 02:01:48 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=120483 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #2 from joeb1@a1poweruser.com --- Close this PR. 8.0 is past EOL, so this PR is now meaningless. And ntfs has been removed from the 10.0 base; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 02:03:27 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B8636111 for ; Sat, 12 Jul 2014 02:03:27 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9FCB32F98 for ; Sat, 12 Jul 2014 02:03:27 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6C23R2P044198 for ; Sat, 12 Jul 2014 02:03:27 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 136873] [ntfs] Missing directories/files on NTFS volume Date: Sat, 12 Jul 2014 02:03:27 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 02:03:27 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136873 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #2 from joeb1@a1poweruser.com --- Close this PR. ntfs has been removed from the 10.0 base system; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 11:11:54 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CC329604 for ; Sat, 12 Jul 2014 11:11:54 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B2A132911 for ; Sat, 12 Jul 2014 11:11:54 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6CBBsFj083702 for ; Sat, 12 Jul 2014 11:11:54 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 73484] [ntfs] Kernel panic when doing `ls` from the client side (running Solaris 8) on an NFS exported ntfs file system in 4.10 Date: Sat, 12 Jul 2014 11:11:54 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 4.10-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 11:11:54 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=73484 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #3 from joeb1@a1poweruser.com --- Close this PR. ntfs has been removed from the 10.0 base system; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 11:13:28 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E869D676 for ; Sat, 12 Jul 2014 11:13:28 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D04ED291D for ; Sat, 12 Jul 2014 11:13:28 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6CBDSj4099811 for ; Sat, 12 Jul 2014 11:13:28 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 97377] [ntfs] [patch] syntax cleanup for ntfs_ihash.c Date: Sat, 12 Jul 2014 11:13:28 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 1.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 11:13:29 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=97377 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #2 from joeb1@a1poweruser.com --- Close this PR. ntfs has been removed from the 10.0 base system; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 12 11:14:38 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 45DCD6F4 for ; Sat, 12 Jul 2014 11:14:38 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2CBE62924 for ; Sat, 12 Jul 2014 11:14:38 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6CBEckK000773 for ; Sat, 12 Jul 2014 11:14:38 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 103035] [ntfs] Directories in NTFS mounted disc images appear as empty files in Samba export Date: Sat, 12 Jul 2014 11:14:38 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 6.1-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: joeb1@a1poweruser.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Jul 2014 11:14:38 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=103035 joeb1@a1poweruser.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |joeb1@a1poweruser.com --- Comment #2 from joeb1@a1poweruser.com --- Close this PR. ntfs has been removed from the 10.0 base system; see http://svnweb.freebsd.org/base/head/sbin/Makefile?view=log&pathrev=247665 -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@FreeBSD.ORG Sun Jul 13 20:19:58 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2E1FDF62 for ; Sun, 13 Jul 2014 20:19:58 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 165D3259B for ; Sun, 13 Jul 2014 20:19:58 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6DKJvx9055104 for ; Sun, 13 Jul 2014 20:19:57 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 86587] [msdosfs] rm -r /PATH fails with lots of small files Date: Sun, 13 Jul 2014 20:19:58 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: yaneurabeya@gmail.com X-Bugzilla-Status: In Discussion X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 13 Jul 2014 20:19:58 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=86587 yaneurabeya@gmail.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |yaneurabeya@gmail.com --- Comment #3 from yaneurabeya@gmail.com --- I see this occur frequently with UFS and ZFS when running buildworld with high -j values, so I'm not sure if this is an issue with just msdosfs, or if it's a race within the VFS layer :/. -- You are receiving this mail because: You are the assignee for the bug. 
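One quick way to poke at that hypothesis would be a stress run; a hedged sketch, where the mount point, file count, and -j level are all invented for illustration:

mkdir -p /mnt/test/stress
jot 100000 1 | sed 's,^,/mnt/test/stress/f,' | xargs touch
make -C /usr/src -j16 buildworld &    # parallel, metadata-heavy load, as in the report
rm -r /mnt/test/stress                # watch whether the removal fails part-way

Running the same loop against msdosfs, UFS, and ZFS mounts would at least show whether the failure follows the filesystem or sits above it in the VFS layer.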
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 14 08:00:11 2014 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AF852DC6 for ; Mon, 14 Jul 2014 08:00:11 +0000 (UTC) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9130C2A53 for ; Mon, 14 Jul 2014 08:00:11 +0000 (UTC) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.14.8/8.14.8) with ESMTP id s6E80Bxl043277 for ; Mon, 14 Jul 2014 08:00:11 GMT (envelope-from bugzilla-noreply@freebsd.org) Message-Id: <201407140800.s6E80Bxl043277@kenobi.freebsd.org> From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bugzilla] Commit Needs MFC MIME-Version: 1.0 X-Bugzilla-Type: whine X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated Date: Mon, 14 Jul 2014 08:00:11 +0000 Content-Type: text/plain X-Content-Filtered-By: Mailman/MimeDel 2.1.18 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Jul 2014 08:00:11 -0000 Hi, You have a bug in the "Needs MFC" state which has not been touched in 7 or more days. This email serves as a reminder that you may want to MFC this bug or mark it as completed. In the event you have a longer MFC timeout, you may update this bug with a comment and I won't remind you again for 7 days. This reminder is only sent on Mondays. Please file a bug about any concerns you may have. This search was scheduled by eadler@FreeBSD.org. 
(5 bugs) Bug 133174: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=133174 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [msdosfs] [patch] msdosfs must support multibyte international characters in file names Bug 136470: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=136470 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] Cannot mount / in read-only, over NFS Bug 139651: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139651 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [nfs] mount(8): read-only remount of NFS volume does not work Bug 144447: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=144447 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [zfs] sharenfs fsunshare() & fsshare_main() non functional Bug 155411: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155411 Severity: Affects Only Me Priority: Normal Hardware: Any Assignee: freebsd-fs@FreeBSD.org Status: Needs MFC Resolution: Summary: [regression] [8.2-release] [tmpfs]: mount: tmpfs : No space left on device From owner-freebsd-fs@FreeBSD.ORG Mon Jul 14 09:43:48 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 12CCB693 for ; Mon, 14 Jul 2014 09:43:48 +0000 (UTC) Received: from nm13-vm4.bullet.mail.ne1.yahoo.com (nm13-vm4.bullet.mail.ne1.yahoo.com [98.138.91.173]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id CA15E23DF for ; Mon, 14 Jul 2014 09:43:47 +0000 (UTC) Received: from [98.138.100.103] by nm13.bullet.mail.ne1.yahoo.com with NNFMP; 14 Jul 2014 09:40:07 -0000 Received: from [98.138.101.179] by tm102.bullet.mail.ne1.yahoo.com with NNFMP; 14 Jul 2014 09:40:07 -0000 Received: from [127.0.0.1] by omp1090.mail.ne1.yahoo.com with NNFMP; 14 Jul 2014 09:40:07 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 312048.88479.bm@omp1090.mail.ne1.yahoo.com Received: (qmail 52069 invoked by uid 60001); 14 Jul 2014 09:40:07 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1405330807; bh=hczx8qQ4PY+JU/4bdJd+bJgJAvDM3Ol2MuVPFZUDQBQ=; h=References:Message-ID:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=IFGzNV59R4xM+04ZgwaRc4kE83pUZ3Bphcc9Yq9Q4VVQKpoPsIKAcV65CcFyKPpAorDE9OwRnSCfwgiUWRrvk4uqDqeka8hQbbURtwHGk4qYhuVp9yIf72vYid80lQFJ6wQrk9t33+tLqoNEtQuJdpTnN4ndx2l2KUmXCimv/b0= X-YMail-OSG: 8ZkZ42EVM1mAa7dU8KpTCVVmN1Tn7K7pav114kldjjF8Y9S _OLdMfxlnH4AlsTSrqGTTk8oXOIZ4Zp6f5y6TXHROFKPzJPoDT.CAPT182PY SwP4QhJsHvaUVx6naXjVoed5uTIEcp8VMBZJnPV06qhYdxvQYv7z0PkKODJa .96MnmDws0PTQtg8D6DSidT5VEWXs0wthfcqK376e4C9ESNCqJVBSL_gSse1 rvvxJaWYZspuOEOSzLvmUPZy1V5q57sgsm9HW2gs0leg9gbYIG8dt1g.RvUk rMRKJ_uiOlKZe1CU4p6CbL3YT1x.temvIQOSYJWOQJY.Kc4Kx8ZTLywZkIWj 4XZ2UXlwEUHAR_YObix48eG_hL_5qSxOPzkMyOlL5L4ReKrNEVAE.9zV01li pJ2xnAiu5wplc7WFtku1_MlgQ5u9BwPUJ8pdrs0zJzuQZJZPNoLFCDT44CXn ZhXZ2cEnSw9dN8jSIgwUzUOfTHECCK4Ku9cGZm0gyIdUkDafhKfWB6O1eLaR SdINZnIWW4Ng8T8pGwtrJyBa_UcYUjv5qkcmdG5fNnF4riS8upiDfsegX Received: from 
[207.154.100.163] by web120906.mail.ne1.yahoo.com via HTTP; Mon, 14 Jul 2014 02:40:07 PDT X-Mailer: YahooMailWebService/0.8.194.680 References: <1403926549.37922.YahooMailNeo@web120905.mail.ne1.yahoo.com> <20140628224020.GB68178@neutralgood.org> Message-ID: <1405330807.92885.YahooMailNeo@web120906.mail.ne1.yahoo.com> Date: Mon, 14 Jul 2014 02:40:07 -0700 From: Duckbreath Reply-To: Duckbreath Subject: Re: Mounting a file system with superblock 32 To: "kpneal@pobox.com" In-Reply-To: <20140628224020.GB68178@neutralgood.org> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Jul 2014 09:43:48 -0000

Thanks for the helpful responses.

I tried the dumpfs utility-
# dumpfs /dev/da0a
dumpfs: /dev/da0a: could not read superblock to fill out disk
# dumpfs /dev/da0b
dumpfs: /dev/da0b: could not read superblock to fill out disk
# dumpfs /dev/da0e
dumpfs: /dev/da0e: could not read superblock to fill out disk
# dumpfs /dev/da0f
dumpfs: /dev/da0f: could not read superblock to fill out disk
# dumpfs /dev/da0
dumpfs: /dev/da0: could not read superblock to fill out disk

I'm somewhat perplexed that fsck_ffs -b 32 thinks the disk is fine.

So, I decided to pay the source code of dumpfs.c a visit! The error message I get is probably the product of ufserr(name);

        while ((name = *argv++) != NULL) {
                if (ufs_disk_fillout(&disk, name) == -1) {
                        ufserr(name);
                        eval |= 1;
                        continue;
                }

The function ufs_disk_fillout() is a core/system function with a manpage, essentially an attempt to fill out a structure so it can print the disk information within it for the dumpfs utility.

So my journey continues to this onerous function which DARES (sorry, some list humor...) not to see my superblock as defined, in /usr/src/lib/libufs/type.c, and I see this definition:

 int ufs_disk_fillout(struct uufsd *disk, const char *name)
 {
         if (ufs_disk_fillout_blank(disk, name) == -1) {
                 return (-1);
         }
         if (sbread(disk) == -1) {
                 ERROR(disk, "could not read superblock to fill out disk");
                 return (-1);
         }
         return (0);
 }

Ok, so there is my error message.  sbread(disk) is returning -1.  If you have a crafty eye, you'll see some really bad error handling practices coming up next, because sbread() already has more meaningful errors and they are overwritten by ufs_disk_fillout's generic error of "could not read".  Those error messages should not be overwritten!

Anyway, sbread() is a bit long, so it would be inappropriate to post it in its entirety, but it returns -1 upon tripping the following error messages, defined within it on the respective line numbers:

 66                 ERROR(disk, "non-existent or truncated superblock");
 87                 ERROR(disk, "no usable known superblock found");
111                 ERROR(disk, "Failed to read sb summary information");

So unfortunately I don't know which of the three my disk is falling into.  I could try to write a stub and see?

I find it odd though that fsck_ffs -b 32 does not have a problem with this, but the code that attempts to seek the superblock.... well... it looks valid.  But it could be failing on the other two?  Who knows. :)

The drive AFAIK is good, or always was.  Its file system is a bit dated, heralding back to the FreeBSD 5.0 days...

________________________________
From: "kpneal@pobox.com" 
To: Duckbreath 
Cc: "freebsd-fs@freebsd.org" 
Sent: Saturday, June 28, 2014 3:40 PM
Subject: Re: Mounting a file system with superblock 32

On Fri, Jun 27, 2014 at 08:35:49PM -0700, Duckbreath via freebsd-fs wrote:
> Hello all, I have a hard drive that represents an older installation of FreeBSD and I would like to access it.  Using a USB -> IDE connection device the drive appears as:
>
> /dev/da0[x*]   where x* is various letters 'a', 'e', 'f', which no doubt represent the partitions from the previous installation.
>
> A simple mount doesn't work though, returning an error message about unrecognized device.

Can you cut-n-paste the exact command you used and the exact error message?

Oh, and can you run "gpart show da0" (or whatever the entire disk appears as)?

Are you trying to mount the exact same partition as the one that fsck says is fine?

> A simple usage of fsck_ffs however shows the file system clear; fsck_ffs -b 32 /dev/da0a returns system clean, and newfs -N will give me various facts about the drive (blocksize, fragment size, cylinder groups, blocks, inodes, and sectors).

I don't think newfs -N is what you want.
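That stub is only a dozen lines with libufs(3). Here is a minimal sketch (the file name sbprobe.c is invented; the calls are the same ones dumpfs makes); it runs the two halves of ufs_disk_fillout() separately so that whichever specific message sbread() leaves in d_error is printed before the generic one can replace it:

#include <sys/param.h>
#include <sys/mount.h>
#include <ufs/ufs/ufsmount.h>
#include <ufs/ufs/dinode.h>
#include <ufs/ffs/fs.h>
#include <libufs.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
	struct uufsd disk;

	if (argc != 2) {
		fprintf(stderr, "usage: sbprobe special\n");
		return (1);
	}
	/* open the device, but do not touch the superblock yet */
	if (ufs_disk_fillout_blank(&disk, argv[1]) == -1) {
		fprintf(stderr, "fillout_blank: %s\n",
		    disk.d_error != NULL ? disk.d_error : "unknown error");
		return (1);
	}
	/* here the specific sbread() diagnostic survives in d_error */
	if (sbread(&disk) == -1) {
		fprintf(stderr, "sbread: %s\n",
		    disk.d_error != NULL ? disk.d_error : "unknown error");
		ufs_disk_close(&disk);
		return (1);
	}
	printf("superblock read OK\n");
	ufs_disk_close(&disk);
	return (0);
}

Build it with "cc -o sbprobe sbprobe.c -lufs" and point it at /dev/da0a; which of the three sbread() diagnostics comes back tells you whether the magic check, the superblock search loop, or the summary-information read is the failing step.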
Rather, try dumpfs or ffsinfo to get info on what is actually on the disk.

With newfs -N you are getting what newfs would put if it hadn't been given the "-N" option. Meaning, that's the "dry run" option for when creating a new filesystem.

> Googling around has shown that perhaps the mdmfs utility is what I need.

Doubtful. I'm guessing a typo when using the mount command.

> This fits my definition of non-trivial.  Any of you know how to mount a UFS1 drive?

It should work. Show us the command that fails and the message it prints if you would. That might give the clue needed.
-- 
"A method for inducing cats to exercise consists of directing a beam of invisible light produced by a hand-held laser apparatus onto the floor ... in the vicinity of the cat, then moving the laser ... in an irregular way fascinating to cats,..." -- US patent 5443036, "Method of exercising a cat" From owner-freebsd-fs@FreeBSD.ORG Mon Jul 14 12:40:08 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CA7BD15E for ; Mon, 14 Jul 2014 12:40:08 +0000 (UTC) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.21.123]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "smtp-sofia.digsys.bg", Issuer "Digital Systems Operational CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 4E771241F for ; Mon, 14 Jul 2014 12:40:07 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [193.68.6.1]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.6/8.14.6) with ESMTP id s6ECe3Y3097228 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Mon, 14 Jul 2014 15:40:03 +0300 (EEST) (envelope-from daniel@digsys.bg) Message-ID: <53C3CFA2.9070309@digsys.bg> Date: Mon, 14 Jul 2014 15:40:02 +0300 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: javocado Subject: Re: Using 2 SSD's to create a SLOG References: <53BD0AFB.3000909@digsys.bg> In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.18 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Jul 2014 12:40:08 -0000 On 11.07.14 22:18, javocado wrote: > Thank you for your feedback. We understand that the devices are > mirrored and tolerant. > To further my understanding, confirm that I should be able to yank one > of the SLOG SSD drives and zfs won't care, and I can subsequently > replace it (issuing what command?) and zfs will then re-mirror it? And > if both drives are removed then only the sync data would be lost? But > if the drives are restored, or in the case of a power loss when the > server comes back up, the sync data on the SSD's will be processed > once zfs starts? > As the SLOG is only ever read at system boot time (actually, when importing the pool), there is no chance you lose data if both of your SLOG drives die *while the system is still running*. If this happens, ZFS will simply work as if no SLOG was configured. 
The only possible time you can lose data due to SLOG is if your system is suddenly restarted and *both* drives die before it boots. Then, any data that was not committed to the other disks, but still in SLOG, will be lost. ZFS integrity should not be impacted and any older data should still be intact. Of course, bugs happen: the above is what should happen in the absence of bugs. Daniel > > On Wed, Jul 9, 2014 at 2:27 AM, Daniel Kalchev > wrote: > > > On 08.07.14 04:06, javocado wrote: > > I am hoping to dumbly plug two SSDs onto motherboard SATA > ports and just > hand them over, raw, to ZFS. > > > Others already commented how you should setup mirrored SLOG. In > addition to that, because of the nature of SSDs and SLOG, I would > recommend the following: > > The SLOG size does not need to be large, it should only cover > several seconds of your synchronous write throughput -- usually > a few GB are plenty. Today's SSDs are much larger than needed for > SLOG. But, today's SSDs also suffer severe performance > degradation, especially for writing when you fill them up with > data and they need to do garbage collection. Also, most SSDs have > "good performance" only when using an 8GB span, not the whole > drive. All of this only makes sense if the drive has TRIM. FreeBSD > already supports TRIM for ZFS SLOG. Therefore ensure you do TRIM > of the entire drive, then partition it with GPT to only use (say) > 8GB for the SLOG. Leave the rest unallocated -- you will never > write there but the drive's controller will use those blocks as > spares for TRIM and this will both improve performance and make > the drive last much longer. Then add both slices as a mirrored log > device to your ZFS pool. > > Daniel > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org > " > 
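Spelled out as commands, the recipe Daniel describes would look roughly like this (the device names ada1/ada2, the pool name tank, and the 1m alignment are assumptions, not anything from the thread):

gpart create -s gpt ada1
gpart add -t freebsd-zfs -a 1m -s 8g -l slog0 ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -a 1m -s 8g -l slog1 ada2
zpool add tank log mirror gpt/slog0 gpt/slog1

As for javocado's "issuing what command?": replacing a failed member of the log mirror should be an ordinary "zpool replace tank gpt/slog0 gpt/slog0-new" (the new label being hypothetical), after which ZFS resilvers the log mirror on its own.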
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 14 22:42:12 2014 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A1E99EDC for ; Mon, 14 Jul 2014 22:42:12 +0000 (UTC) Received: from albert.catwhisker.org (mx.catwhisker.org [198.144.209.73]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6FFB62BDB for ; Mon, 14 Jul 2014 22:42:11 +0000 (UTC) Received: from albert.catwhisker.org (localhost [127.0.0.1]) by albert.catwhisker.org (8.14.9/8.14.9) with ESMTP id s6EMg3ss010543; Mon, 14 Jul 2014 15:42:03 -0700 (PDT) (envelope-from david@albert.catwhisker.org) Received: (from david@localhost) by albert.catwhisker.org (8.14.9/8.14.9/Submit) id s6EMg3Vk010542; Mon, 14 Jul 2014 15:42:03 -0700 (PDT) (envelope-from david) Date: Mon, 14 Jul 2014 15:42:03 -0700 From: David Wolfskill To: fs@freebsd.org Subject: SUJ issue for stable/10 @r268091 Message-ID: <20140714224203.GM1241@albert.catwhisker.org> Reply-To: fs@freebsd.org, David Wolfskill MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="+278g007AL/ykmV8" Content-Disposition: inline User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 14 Jul 2014 22:42:12 -0000 --+278g007AL/ykmV8 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

I rebooted my test machine, and saw:

...
Mounting local file systems:mount: /dev/mfid1: R/W mount of /b denied. Filesystem is not clean - run fsck. Forced mount will invalidate journal contents: Operation not permitted
.
Mounting /etc/fstab filesystems failed, startup aborted
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
Jul 14 04:29:19 init: /bin/sh on /etc/rc terminated abnormally, going to single user mode
Enter full pathname of shell or RETURN for /bin/sh:
# fsck -p /b && exit || fsck -Cy /b && exit
** SU+J Recovering /dev/mfid1
** Reading 33554432 byte journal from inode 5.
** Building recovery table.
** Resolving unreferenced inode list.
/dev/mfid1: Inode 1099973112 link count 0 invalid
/dev/mfid1: UNEXPECTED SU+J INCONSISTENCY
/dev/mfid1: INTERNAL ERROR: GOT TO reply()
/dev/mfid1: UNEXPECTED SOFT UPDATE INCONSISTENCY; RUN fsck MANUALLY.
** /dev/mfid1
USE JOURNAL? yes
** SU+J Recovering /dev/mfid1
** Reading 33554432 byte journal from inode 5.
RECOVER? yes
** Building recovery table.
** Resolving unreferenced inode list.
Inode 1099973112 link count 0 invalid
UNEXPECTED SU+J INCONSISTENCY
FALLBACK TO FULL FSCK? yes
** Skipping journal, falling through to full fsck
** Last Mounted on /b
** Phase 1 - Check Blocks and Sizes
...
SALVAGE? yes
74628531 files, 1805904798 used, 6705919232 free (27523320 frags, 834799489 blocks, 0.3% fragmentation)

***** FILE SYSTEM STILL DIRTY *****
***** FILE SYSTEM WAS MODIFIED *****
***** PLEASE RERUN FSCK *****
Mounting local file systems:mount: /dev/mfid1: R/W mount of /b denied. Filesystem is not clean - run fsck. Forced mount will invalidate journal contents: Operation not permitted
.
Mounting /etc/fstab filesystems failed, startup aborted
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
Jul 14 08:05:38 init: /bin/sh on /etc/rc terminated abnormally, going to single user mode
Enter full pathname of shell or RETURN for /bin/sh:
# fsck -p /b && exit || fsck -Cy /b && exit
** SU+J Recovering /dev/mfid1
Journal timestamp does not match fs mount time
** Skipping journal, falling through to full fsck
...

That's not something I'd want to deploy on a couple hundred machines with the expectation that full fsck is a thing of the past.

How might I gather clues as to what happened (and in particular, what went wrong)?

(I'm not subscribed to -fs@; I've set Reply-To as a hint -- thanks!)

Peace,
david
-- 
David H. Wolfskill david@catwhisker.org
Taliban: Evil cowards with guns afraid of truth from a 14-year old girl.

See http://www.catwhisker.org/~david/publickey.gpg for my public key. 
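A couple of hedged ways to start gathering those clues, using only standard fsck_ffs(8) and tunefs(8) options (whether they expose David's root cause is an open question):

fsck_ffs -d /dev/mfid1         # repeat the SU+J recovery with debugging messages enabled
tunefs -p /dev/mfid1           # print the filesystem's current soft-updates/journal flags
tunefs -j disable /dev/mfid1   # fall back to plain soft updates until SU+J is trusted again

Note that disabling the journal only affects future mounts; it does not repair the inconsistency already on disk, so a full "fsck_ffs -y" pass still has to complete first.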
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 15 05:50:22 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0F8F7608 for ; Tue, 15 Jul 2014 05:50:22 +0000 (UTC) Received: from mx.got.net (mx2.mx3.got.net [207.111.237.41]) by mx1.freebsd.org (Postfix) with ESMTP id E92FC2CB7 for ; Tue, 15 Jul 2014 05:50:21 +0000 (UTC) Received: from do-13.discdrive.bayphoto.com (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx.got.net (mx2.mx3.got.net) with ESMTP id 719B323B8A9 for ; Mon, 14 Jul 2014 22:18:18 -0700 (PDT) Message-ID: <53C4B99A.9000508@bayphoto.com> Date: Mon, 14 Jul 2014 22:18:18 -0700 From: Mike Carlson User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: ZFS Panic on 10.0-RELEASE - again Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms050603000305030104030407" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Jul 2014 05:50:22 -0000 This is a cryptographically signed message in MIME format. --------------ms050603000305030104030407 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable

I posted to the list about a month ago when our ZFS pool panicked upon mounting.

Well, after backing up and restoring from some snapshots, I rebuilt the server on July 4th and restored the data.

Today, completely unrelated to ZFS, our root volume (UFS2) was running a portsnap fetch extract and triggered a separate panic.

I assumed the ZFS data was fine, as it was unrelated. That was not the case. After re-installing and performing a zpool import, our pool panicked yet again.

This is a completely different situation, and again, I've lost around 20TB of data. 
Here is the vmcore.0's backtrace:

Fatal trap 12: page fault while in kernel mode
cpuid = 2; apic id = 12
fault virtual address = 0x50
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff81a85246
stack pointer = 0x28:0xfffffe104cb5aab0
frame pointer = 0x28:0xfffffe104cb5aac0
code segment = base 0x0, limit 0xfffff, type 0x1b = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 0 (system_taskq_7)
trap number = 12
panic: page fault
cpuid = 2
KDB: stack backtrace:
#0 0xffffffff808e7dd0 at kdb_backtrace+0x60
#1 0xffffffff808af8b5 at panic+0x155
#2 0xffffffff80c8e692 at trap_fatal+0x3a2
#3 0xffffffff80c8e969 at trap_pfault+0x2c9
#4 0xffffffff80c8e0f6 at trap+0x5e6
#5 0xffffffff80c75392 at calltrap+0x8
#6 0xffffffff81a8b710 at vdev_mirror_child_select+0x70
#7 0xffffffff81a8b254 at vdev_mirror_io_start+0x234
#8 0xffffffff81aa52d4 at zio_vdev_io_start+0x184
#9 0xffffffff81aa26a6 at zio_execute+0x136
#10 0xffffffff81a32dec at arc_read+0x87c
#15 0xffffffff81a4aee3 at traverse_visitbp+0x393
#16 0xffffffff81a4aee3 at traverse_visitbp+0x393
#17 0xffffffff81a4aee3 at traverse_visitbp+0x393
Uptime: 4m27s
Dumping 2286 out of 65496 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

Reading symbols from /boot/kernel/ums.ko.symbols...done.
Loaded symbols for /boot/kernel/ums.ko.symbols
Reading symbols from /boot/kernel/zfs.ko.symbols...done.
Loaded symbols for /boot/kernel/zfs.ko.symbols
Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
Loaded symbols for /boot/kernel/opensolaris.ko.symbols
#0 doadump (textdump=) at pcpu.h:219
219 __asm("movq %%gs:%1,%0" : "=r" (td)
(kgdb) bt
#0 doadump (textdump=) at pcpu.h:219
#1 0xffffffff808af530 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:447
#2 0xffffffff808af8f4 in panic (fmt=) at /usr/src/sys/kern/kern_shutdown.c:754
#3 0xffffffff80c8e692 in trap_fatal (frame=, eva=) at /usr/src/sys/amd64/amd64/trap.c:882
#4 0xffffffff80c8e969 in trap_pfault (frame=0xfffffe104cb5aa00, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:699
#5 0xffffffff80c8e0f6 in trap (frame=0xfffffe104cb5aa00) at /usr/src/sys/amd64/amd64/trap.c:463
#6 0xffffffff80c75392 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:232
#7 0xffffffff81a85246 in vdev_validate (vd=0xfffff80116622c10, strict=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c:1451
#8 0xffffffff81a8b710 in vdev_mirror_io_done (zio=0x20) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:397
#9 0xffffffff81a8b254 in vdev_mirror_io_start (zio=0xfffff80116622c00) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c:90
#10 0xffffffff81aa52d4 in zio_vdev_io_start (zio=0xfffff80015db43b0) at time.h:63
#11 0xffffffff81aa26a6 in zio_execute (zio=0xfffff80015db43b0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1319
#12 0xffffffff81a32dec in arc_read (pio=0x0, spa=0xfffff80015e0a000, bp=, done=0x2, private=0x0, priority=6, zio_flags=0, arc_flags=, zb=0xfffff80116d7b048) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3371
#13 0xffffffff81a4b8f1 in traverse_prefetcher (spa=0xfffff80015e0a000, zilog=0xf01ff, bp=, zb=, dnp=0xfffff80116622c00, arg=) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:451
#14 0xffffffff81a4ad14 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffffe00112ba800, bp=0xfffffe00112ba980, zb=0xfffffe104cb5ae88) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:250
#15 0xffffffff81a4b77f in traverse_dnode (td=0xfffffe104cb5b900, dnp=0xfffffe00112ba800, objset=203, object=24767564) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:417
#16 0xffffffff81a4b487 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffffe00112b9000, bp=0xfffffe0012145100, zb=0xfffffe104cb5b0a8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:309
#17 0xffffffff81a4aee3 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, bp=0xfffffe00121d9f00, zb=0xfffffe104cb5b1d8) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
#18 0xffffffff81a4aee3 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, bp=0xfffffe001057b780, zb=0xfffffe104cb5b308) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
#19 0xffffffff81a4aee3 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, bp=0xfffffe000eb3c000, zb=0xfffffe104cb5b438) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
#20 0xffffffff81a4aee3 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, bp=0xfffffe000eb28000, zb=0xfffffe104cb5b568) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
#21 0xffffffff81a4aee3 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, bp=0xfffffe0011694000, zb=0xfffffe104cb5b698) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
#22 0xffffffff81a4aee3 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, bp=0xfffff80010f5d840, zb=0xfffffe104cb5b758) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:284
#23 0xffffffff81a4b714 in traverse_dnode (td=0xfffffe104cb5b900, dnp=0xfffff80010f5d800, objset=203, object=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:407
#24 0xffffffff81a4b190 in traverse_visitbp (td=0xfffffe104cb5b900, dnp=0x0, bp=0xfffff80116b3c880, zb=0xfffffe104cb5b8e0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:338
#25 0xffffffff81a4aaf6 in traverse_prefetch_thread (arg=0xfffffe104ca9a3a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c:470
#26 0xffffffff81a21c00 in taskq_run (arg=0xfffff80116cb5780, pending=983551) at /usr/src/sys/modules/zfs/../../cddl/compat/opensolaris/kern/opensolaris_taskq.c:109
#27 0xffffffff808f5b66 in taskqueue_run_locked (queue=0xfffff8001535e800) at /usr/src/sys/kern/subr_taskqueue.c:333
#28 0xffffffff808f63e8 in taskqueue_thread_loop (arg=) at /usr/src/sys/kern/subr_taskqueue.c:535
#29 0xffffffff8088198a in fork_exit (callout=0xffffffff808f6340 , arg=0xfffff80015025e80, frame=0xfffffe104cb5ba40) at /usr/src/sys/kern/kern_fork.c:995
#30 0xffffffff80c758ce in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
#31 0x0000000000000000 in ?? ()
Current language: auto; currently minimal

Pool creation info:

zpool create data raidz /dev/da2.nop /dev/da3.nop /dev/da4.nop /dev/da5.nop
zpool add data raidz /dev/da6.nop /dev/da7.nop /dev/da8.nop /dev/da9.nop
zpool add data raidz /dev/da10.nop /dev/da11.nop /dev/da12.nop /dev/da13.nop
zpool add data raidz /dev/da14.nop /dev/da15.nop /dev/da16.nop /dev/da17.nop
zpool add data raidz /dev/da18.nop /dev/da19.nop /dev/da20.nop /dev/da21.nop
zpool add data raidz /dev/da22.nop /dev/da23.nop /dev/da24.nop /dev/da25.nop
zpool add data spare /dev/da26.nop /dev/da27.nop
zpool add data log /dev/gpt/log.nop
zpool add data cache /dev/gpt/cache.nop

The pool has a log and cache, and I suspect the log device is corrupt. All zfs commands cause a panic (zdb, zfs list, zpool status, etc...)

I'm also wondering if the LSI controller itself is buggy:

mps0: port 0xc000-0xc0ff mem 0xfe83c000-0xfe83ffff,0xfe840000-0xfe87ffff irq 28 at device 0.0 on pci1
mps0: Firmware: 15.00.00.00, Driver: 16.00.00.00-fbsd
mps0: IOCCapabilities: 185c 
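Since the suspicion falls on the separate log device, one hedged avenue (these are standard zdb(8)/zpool(8) options; whether they sidestep this particular panic is an assumption) is to read the vdev labels without importing anything, then attempt a read-only import that tolerates a broken log:

zdb -l /dev/da2                          # print the vdev labels; repeat for /dev/gpt/log
zpool import -N -o readonly=on -m data   # -m imports despite a missing/faulty log device;
                                         # read-only avoids replaying the log or writing anything

If the read-only import survives, the data could be copied off before any read-write recovery attempt.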
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 15 15:17:30 2014 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7E8B3CF7 for ; Tue, 15 Jul 2014 15:17:30 +0000 (UTC) Received: from mail-wi0-x22f.google.com (mail-wi0-x22f.google.com [IPv6:2a00:1450:400c:c05::22f]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 17CFE20C6 for ; Tue, 15 Jul 2014 15:17:29 +0000 (UTC) Received: by mail-wi0-f175.google.com with SMTP id ho1so4547095wib.14 for ; Tue, 15 Jul 2014 08:17:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; 
h=message-id:date:from:user-agent:mime-version:to:subject :content-type:content-transfer-encoding; bh=TVWAdGZT/vKqnNwHE+fgHG/IiPI+CIahv9GP5aeDJss=; b=zGen+TJlYjfxzo4Pp6jDIx8vUj5xkanKYekHRX+TS0YzKyR4e2UnsfB6qC0jauS3mV epHHnRQe+rklzhlFZmP6gnbFXOAloXJ71vDIwddP04sEOOPOE+0ee07gCovRLWHKTqo+ tLiPTHDG2JKBhQ5npVqqukm5v3z6eD8bNvnAXQ6F9BYAFO9V7tK1k0SZw2fm4ouJraTO Q2XQ3QllgVtFck97grNxX29tYul5hpkilXCKXCMay9sO68v7ZF/Lbcv+xJQk0PtmDdmk 0uDoTK5ckZBMsMnYC6fw1Mpot0LAw1ott4OU7hycSv18NCkoC0wCjePjZSM5HfK+AIWu QJnQ== X-Received: by 10.194.2.132 with SMTP id 4mr28353135wju.49.1405437448270; Tue, 15 Jul 2014 08:17:28 -0700 (PDT) Received: from [192.168.1.145] ([193.173.55.180]) by mx.google.com with ESMTPSA id gq4sm44869077wib.8.2014.07.15.08.17.27 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 15 Jul 2014 08:17:27 -0700 (PDT) Message-ID: <53C54607.2090100@gmail.com> Date: Tue, 15 Jul 2014 17:17:27 +0200 From: Johan Hendriks User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: zpool upgrade does not work with latest 10 STABLE Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 15 Jul 2014 15:17:30 -0000 Hello all. Two days ago I upgraded my zpool to the latest version using zpool upgrade. This worked fine. Today there were some new features added to ZFS, so I rebuilt the system. A zpool status shows me that the pool can be upgraded. beasty ~ # zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT storage 1.81T 1.06T 767G 58% 1.00x ONLINE - beasty ~ # zpool status pool: storage state: ONLINE status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable. action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(7) for details. scan: scrub repaired 0 in 2h28m with 0 errors on Tue Jul 15 13:57:56 2014 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 gpt/disk01 ONLINE 0 0 0 gpt/disk02 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 gpt/disk03 ONLINE 0 0 0 gpt/disk04-r ONLINE 0 0 0 errors: No known data errors beasty ~ # uname -a FreeBSD beasty.mydomain.local 10.0-STABLE FreeBSD 10.0-STABLE #0 r268604: Mon Jul 14 11:01:01 CEST 2014 root@beasty.mydomain.local:/usr/obj/usr/src/sys/KRNL amd64 beasty ~ # zpool upgrade This system supports ZFS pool feature flags. All pools are formatted using feature flags. Some supported features are not enabled on the following pools. Once a feature is enabled the pool may become incompatible with software that does not support the feature. See zpool-features(7) for details. POOL FEATURE --------------- storage embedded_data But when I do a zpool upgrade it gives me an error. beasty ~ # zpool upgrade storage This system supports ZFS pool feature flags. cannot set property for 'storage': invalid argument for this pool operation All things seem to work, and the scrub did a good job also.... Is there something I am missing? regards Johan 
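The "cannot set property ... invalid argument" error right after a rebuild is a classic sign of the zpool(8) userland and the loaded zfs kernel module disagreeing about the feature list, which is what the first reply below asks about. A quick, hedged check on 10.x:

freebsd-version -ku    # print kernel then userland versions; they should match
kldstat | grep zfs     # confirm which zfs.ko is actually loaded

If the two disagree, completing installkernel and installworld from the same build and rebooting (so the new zfs.ko is loaded) should let the upgrade go through.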
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 15 15:29:15 2014
Message-ID: <32B54DD726454D099B9C380913A65ABC@multiplay.co.uk>
From: "Steven Hartland"
To: "Johan Hendriks"
Date: Tue, 15 Jul 2014 16:29:01 +0100
Subject: Re: zpool upgrade does not work with latest 10 STABLE

Kernel and world out of sync?

----- Original Message -----
From: "Johan Hendriks"
Sent: Tuesday, July 15, 2014 4:17 PM
Subject: zpool upgrade does not work with latest 10 STABLE

> Hello all.
>
> Two days ago I upgraded my zpool to the latest version using zpool
> upgrade. This worked fine. Today some new features were added to ZFS
> and I rebuilt the system. A zpool status shows me that the pool can
> be upgraded.
[..]
> But when I do a zpool upgrade it gives me an error.
>
> beasty ~ # zpool upgrade storage
> This system supports ZFS pool feature flags.
>
> cannot set property for 'storage': invalid argument for this pool operation
>
> Is there something I am missing?
>
> regards
> Johan
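[That hypothesis is easy to test on 10.x, where freebsd-version(1) and the
uname -K/-U flags are in the base system. A quick sketch:

    freebsd-version -k   # version string of the installed kernel
    freebsd-version -u   # version string of the installed userland
    uname -K             # __FreeBSD_version of the running kernel
    uname -U             # __FreeBSD_version the userland was built against

If the pairs disagree, kernel and world were built from different source
trees.]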
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 15 15:39:01 2014
Subject: Re: zpool upgrade does not work with latest 10 STABLE
From: Dennis Glatting
To: Johan Hendriks
Cc: freebsd-fs@freebsd.org
In-Reply-To: <53C54607.2090100@gmail.com>
Date: Tue, 15 Jul 2014 08:38:48 -0700
Message-ID: <1405438728.90626.109.camel@btw.pki2.com>

On Tue, 2014-07-15 at 17:17 +0200, Johan Hendriks wrote:
> Hello all.
>
> Two days ago I upgraded my zpool to the latest version using zpool
> upgrade. This worked fine. Today some new features were added to ZFS
> and I rebuilt the system. A zpool status shows me that the pool can
> be upgraded.

No problem on my side:

root@Tasha# uname -a
FreeBSD Tasha 10.0-STABLE FreeBSD 10.0-STABLE #0 r268669: Tue Jul 15
07:30:35 PDT 2014
root@Tasha:/disk-2/obj/disk-1/src/sys/SMUNI-FreeBSD10-amd64 amd64

root@Tasha# zpool status disk-2
  pool: disk-2
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jul  7 12:30:48 2014
config:

        NAME        STATE     READ WRITE CKSUM
        disk-2      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da0     ONLINE       0     0     0

errors: No known data errors

[..]
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 15 20:34:39 2014
Date: Tue, 15 Jul 2014 21:34:36 +0100
Subject: Re: zpool upgrade does not work with latest 10 STABLE
From: Tom Evans
To: Johan Hendriks
Cc: FreeBSD FS
In-Reply-To: <53C54607.2090100@gmail.com>

On Tue, Jul 15, 2014 at 4:17 PM, Johan Hendriks wrote:
> Hello all.
>
> Two days ago I upgraded my zpool to the latest version using zpool
> upgrade. This worked fine. Today some new features were added to ZFS
> and I rebuilt the system. A zpool status shows me that the pool can
> be upgraded.
>
[..]
> beasty ~ # uname -a
> FreeBSD beasty.mydomain.local 10.0-STABLE FreeBSD 10.0-STABLE #0 r268604:
> Mon Jul 14 11:01:01 CEST 2014

Kernel is from yesterday, and the features you are looking for were
added today, you say?

Cheers

Tom
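[Tom's observation can be checked directly: the revision baked into the
kernel version string should be at least as new as the source tree that
built the userland. A sketch, assuming the tree was fetched with svnlite
from base:

    uname -v                                  # revision the running kernel was built from
    svnlite info /usr/src | grep '^Revision'  # revision of the source tree on disk

If the tree is newer than the kernel, rebuild and reinstall the kernel
before retrying the upgrade.]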
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 15 21:27:51 2014
Message-ID: <53C59CD3.3070805@gmail.com>
Date: Tue, 15 Jul 2014 23:27:47 +0200
From: Johan Hendriks
To: freebsd-fs@freebsd.org
Subject: Re: zpool upgrade does not work with latest 10 STABLE
In-Reply-To: <53C54607.2090100@gmail.com>

Johan Hendriks wrote on 15-7-2014 17:17:
> Hello all.
>
> Two days ago I upgraded my zpool to the latest version using zpool
> upgrade. This worked fine. Today some new features were added to ZFS
> and I rebuilt the system. A zpool status shows me that the pool can
> be upgraded.
[..]
> beasty ~ # zpool upgrade storage
> This system supports ZFS pool feature flags.
>
> cannot set property for 'storage': invalid argument for this pool
> operation
>
> Is there something I am missing?

On another machine I have no trouble. I will rebuild it tomorrow and
try again on the failing machine.

Thanks for your time.

regards
Johan
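[For reference, the standard sequence for bringing kernel and world back
in sync, per build(7) and the Handbook — a sketch, assuming sources in
/usr/src:

    cd /usr/src
    make buildworld buildkernel   # build both from the same tree
    make installkernel
    shutdown -r now               # boot the new kernel first
    cd /usr/src
    make installworld             # then install the matching userland

Installing the kernel and rebooting before installworld ensures the new
userland never runs on an older kernel.]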
From owner-freebsd-fs@FreeBSD.ORG Wed Jul 16 17:23:48 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory
Date: Wed, 16 Jul 2014 17:23:48 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

--- Comment #9 from vsjcfm@gmail.com ---
Created attachment 144730
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=144730&action=edit
Memory graph

Green: ARC size
Blue: Free memory

--
You are receiving this mail because:
You are the assignee for the bug.
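[A graph like the attached one can be reproduced by sampling two stock
sysctls; a minimal sketch, with an arbitrary one-minute interval, printing
epoch time, ARC bytes, and free bytes for later plotting:

    #!/bin/sh
    # Log ARC size and free memory once a minute.
    pagesize=$(sysctl -n hw.pagesize)
    while :; do
        arc=$(sysctl -n kstat.zfs.misc.arcstats.size)
        free=$(( $(sysctl -n vm.stats.vm.v_free_count) * pagesize ))
        echo "$(date +%s) $arc $free"
        sleep 60
    done
]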
From owner-freebsd-fs@FreeBSD.ORG Wed Jul 16 17:26:46 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory
Date: Wed, 16 Jul 2014 17:26:46 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

--- Comment #10 from vsjcfm@gmail.com ---
Comment on attachment 144730
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=144730
Memory graph

So the ARC target size after boot is 247G. Then the ARC grows to ~185G,
and immediately after that the target size drops down.

BTW, why do you think I'm experiencing this because of other memory
consumers? As you can see, the ZFS memory throttle count is zero.

--
You are receiving this mail because:
You are the assignee for the bug.
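[The numbers discussed here are all exposed directly via sysctl, for anyone
who wants to watch the target size move; a short sketch:

    # Actual ARC size, adaptive target (c), and its configured bounds, in bytes.
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c \
           kstat.zfs.misc.arcstats.c_min kstat.zfs.misc.arcstats.c_max
    # How often ZFS throttled allocations for lack of memory (zero here).
    sysctl kstat.zfs.misc.arcstats.memory_throttle_count
]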
From owner-freebsd-fs@FreeBSD.ORG Wed Jul 16 17:27:37 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 191510] [zfs] ZFS doesn't use all available memory
Date: Wed, 16 Jul 2014 17:27:37 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191510

vsjcfm@gmail.com changed:

           What    |Removed      |Added
----------------------------------------------------------------------------
        Version    |9.2-RELEASE  |9.3-RELEASE

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@FreeBSD.ORG Wed Jul 16 20:46:33 2014
Message-ID: <53C6E49D.8090708@sentex.net>
Date: Wed, 16 Jul 2014 16:46:21 -0400
From: Mike Tancsa
Organization: Sentex Communications
To: freebsd-fs@freebsd.org
Subject: hastctl stuck in sbwait

A couple of times while experimenting, hastctl wedged, with ctrl+t
showing it stuck:

load: 0.06 cmd: hastctl 1355 [sbwait] 261.54r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 262.04r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 262.27r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 262.50r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 262.71r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 262.91r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 263.12r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 263.32r 0.00u 0.00s 0% 3884k
load: 0.06 cmd: hastctl 1355 [sbwait] 263.52r 0.00u 0.00s 0% 3884k
load: 0.05 cmd: hastctl 1355 [sbwait] 263.73r 0.00u 0.00s 0% 3884k
load: 0.05 cmd: hastctl 1355 [sbwait] 263.94r 0.00u 0.00s 0% 3884k
load: 0.05 cmd: hastctl 1355 [sbwait] 264.15r 0.00u 0.00s 0% 3884k
load: 0.05 cmd: hastctl 1355 [sbwait] 264.77r 0.00u 0.00s 0% 3884k

Trying to run gstat to see what the disks might be doing has it stuck
in g_waitfor_event:

load: 0.07 cmd: gstat 1362 [g_waitfor_event] 207.47r 0.00u 0.00s 0% 2520k
load: 0.07 cmd: gstat 1362 [g_waitfor_event] 207.69r 0.00u 0.00s 0% 2520k
load: 0.07 cmd: gstat 1362 [g_waitfor_event] 207.87r 0.00u 0.00s 0% 2520k
load: 0.07 cmd: gstat 1362 [g_waitfor_event] 208.04r 0.00u 0.00s 0% 2520k

Anyone know what might lead to this deadlock?

---Mike

--
-------------------
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mike@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada http://www.tancsa.com/
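[When a process wedges like this, its kernel-side stack usually names the
lock or socket it is sleeping on; a sketch of the usual first diagnostics,
with PIDs taken from the ctrl+t output above:

    procstat -kk 1355   # kernel stack of the stuck hastctl
    procstat -kk 1362   # kernel stack of the stuck gstat
    procstat -f 1355    # open descriptors; sbwait means a blocked socket-buffer read/write
    ps -ax -o pid,wchan,comm | grep hast   # is hastd itself stuck too?

Since hastctl talks to hastd over a control socket, an hastctl stuck in
sbwait typically means hastd stopped answering, so hastd's own state is
the next thing to check.]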