From owner-freebsd-current@freebsd.org Sun Dec 1 02:32:30 2019
From: Warner Losh
Date: Sat, 30 Nov 2019 19:32:15 -0700
To: bob prohaska
Cc: freebsd-arm@freebsd.org, FreeBSD Current
Subject: Re: Rpi3 panic: non-current pmap 0xfffffd001e05b130
In-Reply-To: <20191201015252.GB45887@www.zefox.net>
References: <20191201011615.GA45887@www.zefox.net> <20191201015252.GB45887@www.zefox.net>

Page out errors can be caused by crappy nand...

Warner

On Sat, Nov 30, 2019, 6:53 PM bob prohaska wrote:

> On Sat, Nov 30, 2019 at 05:16:15PM -0800, bob prohaska wrote:
> > A Pi3 running r355024 reported a panic while doing a -j3 make of
> > www/chromium:
> >
> Ok, another panic, looks like a dying storage device. This time there
> was a preamble on the console:
>
> (da0:umass-sim0:0:0:0): READ(10). CDB: 28 00 00 c3 90 d8 00 00 08 00
> (da0:umass-sim0:0:0:0): CAM status: CCB request completed with an error
> (da0:umass-sim0:0:0:0): Error 5, Retries exhausted
> swap_pager: I/O error - pageout failed; blkno 1442883,size 4096, error 5
> swap_pager: I/O error - pageout failed; blkno 1442884,size 4096, error 5
> swap_pager: I/O error - pageout failed; blkno 1442885,size 8192, error 5
> swap_pager: I/O error - pageout failed; blkno 1442887,size 4096, error 5
> swap_pager: I/O error - pagein failed; blkno 1103209,size 4096, error 5
> vm_fault: pager read error, pid 681 (devd)
> swap_pager: I/O error - pagein failed; blkno 1130270,size 4096, error 5
> vm_fault: pager read error, pid 2362 (c++)
> Nov 30 17:37:34 www kernel: Failed to fully fault in a core file segment at VA 0x40400000 with size 0x60b000 to be written at offset 0x32b000 for process devd
> panic: vm_page_assert_unbusied: page 0xfffffd0030f8af80 busy @
> /usr/src/sys/vm/vm_object.c:777
> cpuid = 3
> time = 1575164255
>
> Earlier panics didn't have any proximate warnings on the console, but
> they're probably the same story.
>
> apologies for the noise!
>
> bob prohaska
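If failing flash is the suspect, one quick cross-check from the running system (a sketch only, assuming /dev/da0 is the USB disk that logged the CAM errors above and that it can be hammered with reads for a while) might be a full read pass plus a look at the drive's own error log; the smartctl step only works if the USB bridge passes SMART commands through:

    # read the whole device once; EIO/retry messages on the console point at the drive, cable, or bridge
    dd if=/dev/da0 of=/dev/null bs=1m conv=noerror

    # from sysutils/smartmontools, if installed and the USB bridge cooperates
    smartctl -a /dev/da0

On a Pi the same symptoms can also come from an underpowered USB port, so a dying drive is not the only possibility, but error 5 on both pageins and pageouts does point at the storage path rather than the VM system.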
>
> > panic: non-current pmap 0xfffffd001e05b130
> > cpuid = 0
> > time = 1575161361
> > KDB: stack backtrace:
> > db_trace_self() at db_trace_self_wrapper+0x28
> >   pc = 0xffff000000729e4c  lr = 0xffff0000001066c8
> >   sp = 0xffff000059f3e2b0  fp = 0xffff000059f3e4c0
> >
> > db_trace_self_wrapper() at vpanic+0x18c
> >   pc = 0xffff0000001066c8  lr = 0xffff000000400d7c
> >   sp = 0xffff000059f3e4d0  fp = 0xffff000059f3e580
> >
> > vpanic() at panic+0x44
> >   pc = 0xffff000000400d7c  lr = 0xffff000000400b2c
> >   sp = 0xffff000059f3e590  fp = 0xffff000059f3e610
> >
> > panic() at pmap_remove_pages+0x8d4
> >   pc = 0xffff000000400b2c  lr = 0xffff00000074154c
> >   sp = 0xffff000059f3e620  fp = 0xffff000059f3e6e0
> >
> > pmap_remove_pages() at vmspace_exit+0xc0
> >   pc = 0xffff00000074154c  lr = 0xffff0000006c9c00
> >   sp = 0xffff000059f3e6f0  fp = 0xffff000059f3e720
> >
> > vmspace_exit() at exit1+0x4f8
> >   pc = 0xffff0000006c9c00  lr = 0xffff0000003bc2a4
> >   sp = 0xffff000059f3e730  fp = 0xffff000059f3e7a0
> >
> > exit1() at sys_sys_exit+0x10
> >   pc = 0xffff0000003bc2a4  lr = 0xffff0000003bbda8
> >   sp = 0xffff000059f3e7b0  fp = 0xffff000059f3e7b0
> >
> > sys_sys_exit() at do_el0_sync+0x514
> >   pc = 0xffff0000003bbda8  lr = 0xffff000000747aa4
> >   sp = 0xffff000059f3e7c0  fp = 0xffff000059f3e860
> >
> > do_el0_sync() at handle_el0_sync+0x90
> >   pc = 0xffff000000747aa4  lr = 0xffff00000072ca14
> >   sp = 0xffff000059f3e870  fp = 0xffff000059f3e980
> >
> > handle_el0_sync() at 0x404e6d60
> >   pc = 0xffff00000072ca14  lr = 0x00000000404e6d60
> >   sp = 0xffff000059f3e990  fp = 0x0000ffffffffd590
> >
> > KDB: enter: panic
> > [ thread pid 94966 tid 100145 ]
> > Stopped at  0x40505460:  undefined  54000042
> > db> bt
> > Tracing pid 94966 tid 100145 td 0xfffffd002552b000
> > db_trace_self() at db_stack_trace+0xf8
> >   pc = 0xffff000000729e4c  lr = 0xffff000000103b0c
> >   sp = 0xffff000059f3de80  fp = 0xffff000059f3deb0
> >
> > db_stack_trace() at db_command+0x228
> >   pc = 0xffff000000103b0c  lr = 0xffff000000103784
> >   sp = 0xffff000059f3dec0  fp = 0xffff000059f3dfa0
> >
> > db_command() at db_command_loop+0x58
> >   pc = 0xffff000000103784  lr = 0xffff00000010352c
> >   sp = 0xffff000059f3dfb0  fp = 0xffff000059f3dfd0
> >
> > db_command_loop() at db_trap+0xf4
> >   pc = 0xffff00000010352c  lr = 0xffff000000106830
> >   sp = 0xffff000059f3dfe0  fp = 0xffff000059f3e200
> >
> > db_trap() at kdb_trap+0x1d8
> >   pc = 0xffff000000106830  lr = 0xffff0000004492fc
> >   sp = 0xffff000059f3e210  fp = 0xffff000059f3e2c0
> >
> > kdb_trap() at do_el1h_sync+0xf4
> >   pc = 0xffff0000004492fc  lr = 0xffff000000747418
> >   sp = 0xffff000059f3e2d0  fp = 0xffff000059f3e300
> >
> > do_el1h_sync() at handle_el1h_sync+0x78
> >   pc = 0xffff000000747418  lr = 0xffff00000072c878
> >   sp = 0xffff000059f3e310  fp = 0xffff000059f3e420
> >
> > handle_el1h_sync() at kdb_enter+0x34
> >   pc = 0xffff00000072c878  lr = 0xffff000000448948
> >   sp = 0xffff000059f3e430  fp = 0xffff000059f3e4c0
> >
> > kdb_enter() at vpanic+0x1a8
> >   pc = 0xffff000000448948  lr = 0xffff000000400d98
> >   sp = 0xffff000059f3e4d0  fp = 0xffff000059f3e580
> >
> > vpanic() at panic+0x44
> >   pc = 0xffff000000400d98  lr = 0xffff000000400b2c
> >   sp = 0xffff000059f3e590  fp = 0xffff000059f3e610
> >
> > panic() at pmap_remove_pages+0x8d4
> >   pc = 0xffff000000400b2c  lr = 0xffff00000074154c
> >   sp = 0xffff000059f3e620  fp = 0xffff000059f3e6e0
> >
> > pmap_remove_pages() at vmspace_exit+0xc0
> >   pc = 0xffff00000074154c  lr = 0xffff0000006c9c00
> >   sp = 0xffff000059f3e6f0  fp = 0xffff000059f3e720
> >
> > vmspace_exit() at exit1+0x4f8
> >   pc = 0xffff0000006c9c00  lr = 0xffff0000003bc2a4
> >   sp = 0xffff000059f3e730  fp = 0xffff000059f3e7a0
> >
> > exit1() at sys_sys_exit+0x10
> >   pc = 0xffff0000003bc2a4  lr = 0xffff0000003bbda8
> >   sp = 0xffff000059f3e7b0  fp = 0xffff000059f3e7b0
> >
> > sys_sys_exit() at do_el0_sync+0x514
> >   pc = 0xffff0000003bbda8  lr = 0xffff000000747aa4
> >   sp = 0xffff000059f3e7c0  fp = 0xffff000059f3e860
> >
> > do_el0_sync() at handle_el0_sync+0x90
> >   pc = 0xffff000000747aa4  lr = 0xffff00000072ca14
> >   sp = 0xffff000059f3e870  fp = 0xffff000059f3e980
> >
> > handle_el0_sync() at 0x404e6d60
> >   pc = 0xffff00000072ca14  lr = 0x00000000404e6d60
> >   sp = 0xffff000059f3e990  fp = 0x0000ffffffffd590
> >
> > db>
> >
> > The last top screen showed
> >
> > last pid: 94966;  load averages: 1.22, 1.42, 1.40   up 0+05:10:16  16:49:20
> > 43 processes:  1 running, 42 sleeping
> > CPU:  3.7% user,  0.0% nice, 20.0% system,  5.5% interrupt, 70.8% idle
> > Mem: 502M Active, 6672K Inact, 150M Laundry, 184M Wired, 90M Buf, 55M Free
> > Swap: 7194M Total, 3835M Used, 3359M Free, 53% Inuse, 11M In, 3852K Out
> >
> >   PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
> > 71350 root         1  22    0   951M   144M swread  0  10:20   5.97% c++
> > 58502 root         1  21    0   986M   232M swread  1  11:23   3.42% c++
> > 77283 root         1  22    0   963M   151M swread  0   8:45   3.30% c++
> >  6904 root         1  22    0  1144M   191M swread  0  21:26   3.29% c++
> >  1091 bob          1  52    0    11M   324K wait    3   0:55   0.27% sh
> >  1079 bob          1  20    0    13M  1636K CPU0    0   0:57   0.22% top
> >  1074 bob          1  20    0    19M  1316K select  1   0:13   0.03% sshd
> >   970 root         1  20    0    16M  1500K select  1   0:02   0.02% sendmail
> >  1069 root         1  20    0   204M  1044K select  3   1:40   0.00% ninja
> >  1050 root         1  20    0    12M   972K select  2   0:02   0.00% make
> >   977 root         1  20    0    11M     0B nanslp  1   0:02   0.00%
> >   957 root         1  20    0    19M  1216K select  1   0:01   0.00% sshd
> >   824 root         1  20    0    11M  1084K select  2   0:01   0.00% syslogd
> >  1084 bob          1  20    0    13M  1008K ttyin   0   0:00   0.00% tcsh
> >
> > and the last few storage activity log entries were:
> >
> > dT: 1.002s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
> >     1    713    694   4666    3.7     20    116    6.0      0      0    0.0   90.9  mmcsd0
> >     1    713    694   4666    3.8     20    116    6.0      0      0    0.0   91.2  mmcsd0s2
> >     2    751    734   4730    2.0     17     96    0.6      0      0    0.0   79.4  da0
> >     1    713    694   4666    3.8     20    116    6.0      0      0    0.0   91.4  mmcsd0s2b
> >     2    751    734   4730    2.0     17     96    0.6      0      0    0.0   79.9  da0p6
> > Sat Nov 30 16:48:21 PST 2019
> > Device          1K-blocks     Used    Avail Capacity
> > /dev/mmcsd0s2b    4404252  1958976  2445276    44%
> > /dev/da0p6        5242880  1956872  3286008    37%
> > Total             9647132  3915848  5731284    41%
> > Nov 30 16:38:17 www sshd[91264]: error: PAM: Authentication error for illegal user support from 103.133.104.114
> > Nov 30 16:38:17 www sshd[91264]: error: Received disconnect from 103.133.104.114 port 52716:14: No more user authentication methods available. [preauth]
> > 0/1016/1016/19178 mbuf clusters in use (current/cache/total/max)
> > procs     memory        page                       disks     faults        cpu
> > r b w     avm     fre   flt  re  pi  po    fr    sr mm0 da0    in    sy    cs us sy id
> > 0 0 12 4523836  56064  6988 186 715 257  6931 25128   0   0 30789  1073 29817 14 26 60
> > dT: 1.002s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
> >     0    709    687   4588    3.9     22    144    5.8      0      0    0.0   90.5  mmcsd0
> >     0    709    687   4588    3.9     22    144    5.8      0      0    0.0   90.9  mmcsd0s2
> >     2    698    679   4696    2.1     19    104    0.6      0      0    0.0   75.2  da0
> >     0    709    687   4588    3.9     22    144    5.8      0      0    0.0   91.0  mmcsd0s2b
> >     2    698    679   4696    2.2     19    104    0.6      0      0    0.0   75.7  da0p6
> > Sat Nov 30 16:48:22 PST 2019
> > Device          1K-blocks     Used    Avail Capacity
> > /dev/mmcsd0s2b    4404252  1959032  2445220    44%
> > /dev/da0p6        5242880  1956928  3285952    37%
> > Total             9647132  3915960  5731172    41%
> > Nov 30 16:38:17 www sshd[91264]: error: PAM: Authentication error for illegal user support from 103.133.104.114
> > Nov 30 16:38:17 www sshd[91264]: error: Received disconnect from 103.133.104.114 port 52716:14: No more user authentication methods available. [preauth]
> > 0/1016/1016/19178 mbuf clusters in use (current/cache/total/max)
> > procs     memory        page                       disks     faults        cpu
> > r b w     avm     fre   flt  re  pi  po    fr    sr mm0 da0    in    sy    cs us sy id
> > 0 0 12 4523836  55844  6989 186 715 257  6932 25127 604 604 30790  1073 29819 14 26 60
> > dT: 1.001s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
> >     2    736    680   4829    3.9     56    292    5.0      0      0    0.0   94.4  mmcsd0
> >     2    736    680   4829    4.0     56    292    5.0      0      0    0.0   94.6  mmcsd0s2
> >     1    680    627   4014    1.9     53    328    1.0      0      0    0.0   71.1  da0
> >     2    736    680   4829    4.0     56    292    5.0      0      0    0.0   94.7  mmcsd0s2b
> >     1    680    627   4014    1.9     53    328    1.0      0      0    0.0   71.7  da0p6
> > Sat Nov 30 16:48:24 PST 2019
> > Device          1K-blocks     Used    Avail Capacity
> > /dev/mmcsd0s2b    4404252  1959324  2444928    44%
> > /dev/da0p6        5242880  1957468  3285412    37%
> > Total             9647132  3916792  5730340    41%
> > Nov 30 16:38:17 www sshd[91264]: error: PAM: Authentication error for illegal user support from 103.133.104.114
> > Nov 30 16:38:17 www sshd[91264]: error: Received disconnect from 103.133.104.114 port 52716:14: No more user authentication methods available. [preauth]
> > 0/1016/1016/19178 mbuf clusters in use (current/cache/total/max)
> > procs     memory        page                       disks     faults        cpu
> > r b w     avm     fre   flt  re  pi  po    fr    sr mm0 da0    in    sy    cs us sy id
> > 0 0 12 4523836  52860  6989 186 715 257  6932 25125 1038 1038 30790  1073 29820 14 26 60
> > dT: 1.002s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
> >     1    751    702   4860    4.0     49    251    5.0      0      0    0.0   94.4  mmcsd0
> >     1    751    702   4860    4.1     49    251    5.1      0      0    0.0   94.7  mmcsd0s2
> >     2    704    658   4082    1.8     46    235    0.6      0      0    0.0   71.9  da0
> >     1    751    702   4860    4.1     49    251    5.1      0      0    0.0   94.8  mmcsd0s2b
> >     2    704    658   4082    1.8     46    235    0.7      0      0    0.0   72.6  da0p6
> > Sat Nov 30 16:48:26 PST 2019
> > Device          1K-blocks     Used    Avail Capacity
> > /dev/mmcsd0s2b    4404252  1959504  2444748    44%
> > /dev/da0p6        5242880  1957540  3285340    37%
> > Total             9647132  3917044  5730088    41%
> > Nov 30 16:38:17 www sshd[91264]: error: PAM: Authentication error for illegal user support from 103.133.104.114
> > Nov 30 16:38:17 www sshd[91264]: error: Received disconnect from 103.133.104.114 port 52716:14: No more user authentication methods available. [preauth]
> > 0/1016/1016/19178 mbuf clusters in use (current/cache/total/max)
> > procs     memory        page                       disks     faults        cpu
> > r b w     avm     fre   flt  re  pi  po    fr    sr mm0 da0    in    sy    cs us sy id
> > 0 0 12 4523868  46872  6989 186 715 257  6932 25123   0   0 30790  1073 29820 14 26 60
> > dT: 1.002s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
> >     2    700    681   4888    3.7     19    108    3.7      0      0    0.0   92.1  mmcsd0
> >     2    700    681   4888    3.8     19    108    3.7      0      0    0.0   92.5  mmcsd0s2
> >     2    709    687   4314    2.1     22    108    3.4      0      0    0.0   78.2  da0
> >     2    700    681   4888    3.8     19    108    3.7      0      0    0.0   92.6  mmcsd0s2b
> >     2    709    687   4314    2.1     22    108    3.4      0      0    0.0   78.7  da0p6
> > Sat Nov 30 16:48:28 PST 2019
> > Device          1K-blocks     Used    Avail Capacity
> > /dev/mmcsd0s2b    440
> >
> > It's clear the machine was heavily loaded, but storage didn't appear to be swamped.
> > I hope the foregoing has been of some interest, thanks for reading!
> >
> > bob prohaska
> >
> _______________________________________________
> freebsd-arm@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-arm
> To unsubscribe, send any mail to "freebsd-arm-unsubscribe@freebsd.org"
>