To: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
From: mike tancsa
Subject: Open ZFS vs FreeBSD ZFS boot issues
I have a strange edge case I am trying to work around. I have a customer's legacy VM which is RELENG_11 on ZFS. There is some corruption that won't clear on a bunch of directories, so I want to re-create it from backups. I have done this many times in the past, but this one is giving me grief.

Normally I do something like this on my backup server (RELENG_13):

truncate -s 100G file.raw
mdconfig -f file.raw
gpart create -s gpt md0
gpart add -t freebsd-boot -s 512k md0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 md0
gpart add -t freebsd-swap -s 2G md0
gpart add -t freebsd-zfs md0
zpool create -d -f -o altroot=/mnt2 -o feature@lz4_compress=enabled -o cachefile=/var/tmp/zpool.cache myZFSPool /dev/md0p3

Then:

zfs send -r backuppool | zfs recv myZFSPool

I can then export/import myZFSPool without issue. I can even import and examine myZFSPool on the original RELENG_11 VM that is currently running. Checksums of all the files under /boot are identical.
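One thing worth comparing on a rebuild like this is whether the zpool create on the newer box quietly enabled any feature flags beyond what the old pool carries; a small sketch of how I'd diff them (the pool names below are placeholders, and the function just filters `zpool get all` output):

```shell
# list_features: given 'zpool get all <pool>' output on stdin, print the
# feature@ flags that are not disabled, sorted for easy diffing.
list_features() {
    awk '$2 ~ /^feature@/ && $3 != "disabled" {print $2}' | sort
}

# Usage on a live system (pool names 'livepool'/'newpool' are placeholders):
#   zpool get all livepool | list_features > /tmp/live.features
#   zpool get all newpool  | list_features > /tmp/new.features
#   diff /tmp/live.features /tmp/new.features
```

Any flag that shows up only on the new pool is a candidate for what the old gptzfsboot might be choking on.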
But every time I try to boot it (KVM), it panics early:

FreeBSD/x86 ZFS enabled bootstrap loader, Revision 1.1
(Tues Oct 10:24:17 EDT 2018 user@hostname)
panic: free: guard2 fail @ 0xbf153040 + 2061
from unknown:0
--> Press a key on the console to reboot <--

Through a bunch of pf rdrs and nfs mounts, I was able to do the same steps above on the live RELENG_11 image, and with that zfs send/recv the image boots up no problem. Any ideas on how to work around this, or what the problem might be that I am running into? The issue seems to be that I do the zfs recv on a RELENG_13 box; if I do the zfs recv on RELENG_11 instead, it works, but takes a LOT longer. zdb differences [1] below. The kernel is r339251 11.2-STABLE. I know this is a crazy old issue, but I am hoping to at least learn something about ZFS as a result of going down this rabbit hole. I think I will just do the send|recv via a RELENG_11 box to get them up and running. They don't have the $ to get me to upgrade it all for them, and this is partly a favor to help them limp along a bit more...
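For the interim workaround, the send/recv can also be piped straight to the RELENG_11 box over ssh rather than looping through pf rdr and NFS; a minimal sketch (host, snapshot, and pool names are all placeholders, and the command is only printed here for review rather than executed):

```shell
# Sketch only: run the recv on the RELENG_11 host directly over ssh,
# avoiding the pf rdr + NFS detour.  All names below are placeholders.
SNAP="backuppool@migrate"
DEST="myZFSPool"
CMD="zfs send -R $SNAP | ssh root@releng11-host zfs recv -F $DEST"
echo "$CMD"   # print for review before running it on the backup server
```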
---Mike

[1] zdb, live pool ns9zroot:

    version: 5000
    name: 'livezroot'
    state: 0
    txg: 26872926
    pool_guid: 15183996218106005646
    hostid: 2054190969
    hostname: 'customer-hostname'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 15183996218106005646
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15258031439924457243
            path: '/dev/vtbd0p3'
            whole_disk: 1
            metaslab_array: 256
            metaslab_shift: 32
            ashift: 12
            asize: 580889083904
            is_log: 0
            DTL: 865260
            create_txg: 4
            com.delphix:vdev_zap_leaf: 129
            com.delphix:vdev_zap_top: 130
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

MOS Configuration:

    version: 5000
    name: 'fromBackupPool'
    state: 0
    txg: 2838
    pool_guid: 1150606583960632990
    hostid: 2054190969
    hostname: 'customer-hostname'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 1150606583960632990
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 4164348845485675975
            path: '/dev/md0p3'
            whole_disk: 1
            metaslab_array: 256
            metaslab_shift: 29
            ashift: 12
            asize: 105221193728
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_leaf: 129
            com.delphix:vdev_zap_top: 130
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
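One visible delta in the two zdb dumps above is metaslab_shift (32 on the live pool vs 29 on the rebuild). That shift is just log2 of the metaslab size, so differing values are expected when the pools are very different sizes (580 GB vs 100 GB) rather than being a sign of corruption; a quick check of what those two values mean:

```shell
# metaslab_shift is log2(metaslab size); compute the sizes for the two
# values reported by zdb above (needs 64-bit shell arithmetic).
for shift in 32 29; do
    printf 'metaslab_shift=%d -> %d bytes per metaslab\n' "$shift" $((1 << shift))
done
```

So the live pool carves itself into 4 GiB metaslabs and the rebuilt 100 GB pool into 512 MiB ones, which works out to a similar metaslab count on each.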