From owner-freebsd-current@freebsd.org Mon Jun 29 04:57:35 2020
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Ryan Libby
CC: Konstantin Belousov, Jeff Roberson, freebsd-current@freebsd.org
Subject: Re: r358252 causes intermittent hangs where processes are stuck sleeping on btalloc
Date: Mon, 29 Jun 2020 04:57:23 +0000
Just in case you were waiting for another email, I have now run several
cycles of the kernel build over NFS on a recent head kernel with the one
line change and it has not hung.

I don't know if this is the correct fix, but it would be nice to get
something into head to fix this. If I don't hear anything in the next few
days, I'll put it in a PR so it doesn't get forgotten.

rick

________________________________________
From: owner-freebsd-current@freebsd.org on behalf of Rick Macklem
Sent: Thursday, June 18, 2020 11:42 PM
To: Ryan Libby
Cc: Konstantin Belousov; Jeff Roberson; freebsd-current@freebsd.org
Subject: Re: r358252 causes intermittent hangs where processes are stuck sleeping on btalloc

Ryan Libby wrote:
>On Mon, Jun 15, 2020 at 5:06 PM Rick Macklem wrote:
>>
>> Rick Macklem wrote:
>> >r358098 will hang fairly easily, in 1-3 cycles of the kernel build over NFS.
>> >I thought this was the culprit, since I did 6 cycles of r358097 without a hang.
>> >However, I just got a hang with r358097, but it looks rather different.
>> >The r358097 hang did not have any processes sleeping on btalloc. They
>> >appeared to be waiting on two different locks in the buffer cache.
>> >As such, I think it might be a different problem. (I'll admit I should have
>> >made notes about this one before rebooting, but I was frustrated that
>> >it happened and rebooted before looking at it in much detail.)
>> Ok, so I did 10 cycles of the kernel build over NFS for r358096 and never
>> got a hang.
>> --> It seems that r358097 is the culprit and r358098 makes it easier
>>     to reproduce.
>> --> Basically runs out of kernel memory.
>>
>> It is not obvious if I can revert these two commits without reverting
>> other ones, since there were a bunch of vm changes after these.
>>
>> I'll take a look, but if you guys have any ideas on how to fix this, please
>> let me know.
>>
>> Thanks, rick
>
>Interesting.  Could you try re-adding UMA_ZONE_NOFREE to the vmem btag
>zone to see if that rescues it, on whatever base revision gets you a
>reliable repro?
Good catch! That seems to fix it. I've done 8 cycles of kernel build over
NFS without a hang (normally I'd get one in the first 1-3 cycles).

I don't know if the intent was to delete UMA_ZONE_VM and r358097 had a
typo in it that deleted UMA_ZONE_NOFREE or ???
Anyhow, I just put it back to UMA_ZONE_VM | UMA_ZONE_NOFREE and the
hangs seem to have gone away.

The small patch I did is attached, in case that isn't what you meant.

I'll run a few more cycles just in case, but I think this fixes it.

Thanks, rick

> Jeff, to fill you in, I have been getting intermittent hangs on a Pentium 4
> (single core i386) with 1.25Gbytes ram when doing kernel builds using
> head kernels from this winter. (I also saw one when doing a kernel build
> on UFS, so they aren't NFS specific, although easier to reproduce that way.)
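[The patch attachment is not reproduced in this archive. Going by the
description above, the change amounts to restoring the flag where the
boundary-tag zone is created in sys/kern/subr_vmem.c. The diff below is a
sketch of that idea reconstructed from memory of the stock code, not Rick's
literal patch; the exact uma_zcreate() arguments may differ in your tree:]

	--- sys/kern/subr_vmem.c
	+++ sys/kern/subr_vmem.c
	@@ vmem_startup()
	 	vmem_bt_zone = uma_zcreate("vmem btag",
	 	    sizeof(struct vmem_btag), NULL, NULL, NULL, NULL,
	-	    UMA_ALIGN_PTR, UMA_ZONE_VM);
	+	    UMA_ALIGN_PTR, UMA_ZONE_VM | UMA_ZONE_NOFREE);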
> After a typical hang, there will be a bunch of processes sleeping on "btalloc"
> and several processes holding the following lock:
>     exclusive sx lock @ vm/vm_map.c:4761
> - I have seen hangs where that is the only lock held by any process except
>   the interrupt thread.
> - I have also seen processes waiting on the following locks:
>     kern/subr_vmem.c:1343
>     kern/subr_vmem.c:633
>
> I can't be absolutely sure r358098 is the culprit, but it seems to make the
> problem more reproducible.
>
> If anyone has a patch suggestion, I can test it.
> Otherwise, I will continue to test r358097 and earlier, to try and see what hangs
> occur. (I've done 8 cycles of testing of r356776 without difficulties, but that
> doesn't guarantee it isn't broken.)
>
> There is a bunch more of the stuff I got for Kostik and Ryan below.
> I can do "ddb" when it is hung, but it is a screen console, so I need to
> transcribe the output to email by hand. (i.e. if you need something
> specific I can do that, but trying to do everything Kostik and Ryan asked
> for isn't easy.)
>
> rick
>
> Konstantin Belousov wrote:
> >On Fri, May 22, 2020 at 11:46:26PM +0000, Rick Macklem wrote:
> >> Konstantin Belousov wrote:
> >> >On Wed, May 20, 2020 at 11:58:50PM -0700, Ryan Libby wrote:
> >> >> On Wed, May 20, 2020 at 6:04 PM Rick Macklem wrote:
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > Since I hadn't upgraded a kernel through the winter, it took me a while
> >> >> > to bisect this, but r358252 seems to be the culprit.
> No longer true. I succeeded in reproducing the hang today running a
> r358251 kernel.
>
> I haven't had much luck so far, but see below for what I have learned.
>
> >> >> >
> >> >> > If I do a kernel build over NFS using my not so big Pentium 4 (single core,
> >> >> > 1.25Gbytes RAM, i386), about every second attempt will hang.
> >> >> > When I do a "ps" in the debugger, I see processes sleeping on btalloc.
> >> >> > If I revert to r358251, I cannot reproduce this.
> As above, this is no longer true.
>
> >> >> >
> >> >> > Any ideas?
> >> >> >
> >> >> > I can easily test any change you might suggest to see if it fixes the
> >> >> > problem.
> >> >> >
> >> >> > If you want more debug info, let me know, since I can easily
> >> >> > reproduce it.
> >> >> >
> >> >> > Thanks, rick
> >> >>
> >> >> Nothing obvious to me.  I can maybe try a repro on a VM...
> >> >>
> >> >> ddb ps, acttrace, alltrace, show all vmem, show page would be welcome.
> >> >>
> >> >> "btalloc" is "We're either out of address space or lost a fill race."
> From what I see, I think it is "out of address space".
> For one of the hangs, when I did "show alllocks", everything except the
> intr thread was waiting for the
>     exclusive sx lock @ vm/vm_map.c:4761
>
> >> >
> >> >Yes, I would not be surprised to be out of something on a 1G i386 machine.
> >> >Please also add 'show alllocks'.
> >> Ok, I used an up to date head kernel and it took longer to reproduce a hang.
> Go down to Kostik's comment about kern.maxvnodes for the rest of what I've
> learned. (The time it takes to reproduce one of these varies greatly, but I usually
> get one within 3 cycles of a full kernel build over NFS. I have had it happen
> once when doing a kernel build over UFS.)
>
> >> This time, none of the processes are stuck on "btalloc".
> > I'll try and give you most of the above, but since I have to type it in by hand
> > from the screen, I might not get it all. (I'm no real typist;-)
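[A note on the "btalloc" wait channel discussed above: it comes from the
boundary-tag allocator in sys/kern/subr_vmem.c. The sketch below is
paraphrased from memory of the stock code of that era, not a verbatim copy,
so names and flags should be checked against the tree. It shows why threads
pile up on "btalloc" when the kernel arena runs out of address space: the
tag allocation fails and the thread just naps and retries.]

	/*
	 * Paraphrased sketch of vmem_bt_alloc() from sys/kern/subr_vmem.c.
	 * Boundary tags are carved straight out of the per-domain kernel
	 * arena.  If that fails, we are out of KVA (or lost a fill race),
	 * and the thread sleeps on "btalloc" before retrying -- the wmesg
	 * seen in the ps output that follows.
	 */
	static void *
	vmem_bt_alloc(uma_zone_t zone, vm_size_t bytes, int domain,
	    uint8_t *pflag, int wait)
	{
		vmem_addr_t addr;

		*pflag = UMA_SLAB_KERNEL;

		if (vmem_xalloc(vm_dom[domain].vmd_kernel_arena, bytes,
		    0, 0, 0, VMEM_ADDR_MIN, VMEM_ADDR_MAX,
		    M_NOWAIT | M_NOVM | M_USE_RESERVE | M_BESTFIT,
		    &addr) == 0) {
			/* Got KVA; now try to back it with pages. */
			if (kmem_back_domain(domain, kernel_object, addr,
			    bytes, M_NOWAIT | M_USE_RESERVE) == 0)
				return ((void *)addr);
			vmem_xfree(vm_dom[domain].vmd_kernel_arena, addr,
			    bytes);
		}
		/* We're either out of address space or lost a fill race. */
		if ((wait & M_WAITOK) != 0)
			pause("btalloc", 1);
		return (NULL);
	}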
> > show alllocks
> > exclusive lockmgr ufs (ufs) r = 0 locked @ kern/vfs_subr.c:3259
> > exclusive lockmgr nfs (nfs) r = 0 locked @ kern/vfs_lookup.c:737
> > exclusive sleep mutex kernel area domain (kernel arena domain) r = 0 locked @ kern/subr_vmem.c:1343
> > exclusive lockmgr bufwait (bufwait) r = 0 locked @ kern/vfs_bio.c:1663
> > exclusive lockmgr ufs (ufs) r = 0 locked @ kern/vfs_subr.c:2930
> > exclusive lockmgr syncer (syncer) r = 0 locked @ kern/vfs_subr.c:2474
> > Process 12 (intr) thread 0x.. (1000008)
> > exclusive sleep mutex Giant (Giant) r = 0 locked @ kern/kern_intr.c:1152
> >
> > ps
> > - Not going to list them all, but here are the ones that seem interesting...
> >     18   0   0   0  DL  vlruwt  0x11d939cc  [vnlru]
> >     16   0   0   0  DL  (threaded)          [bufdaemon]
> > 100069               D   qsleep              [bufdaemon]
> > 100074               D   -                   [bufspacedaemon-0]
> > 100084               D   sdflush 0x11923284  [/ worker]
> > - and more of these for the other UFS file systems
> >      9   0   0   0  DL  psleep  0x1e2f830   [vmdaemon]
> >      8   0   0   0  DL  (threaded)          [pagedaemon]
> > 100067               D   psleep  0x1e2e95c   [dom0]
> > 100072               D   launds  0x1e2e968   [laundry: dom0]
> > 100073               D   umarcl  0x12cc720   [uma]
> > … a bunch of usb and cam ones
> > 100025               D   -       0x1b2ee40   [doneq0]
> > …
> >     12   0   0   0  RL  (threaded)          [intr]
> > 100007               I                       [swi6: task queue]
> > 100008               Run CPU 0               [swi6: Giant taskq]
> > …
> > 100000               D   swapin  0x1d96dfc   [swapper]
> > - and a bunch more in D state.
> > Does this mean the swapper was trying to swap in?
> >
> > acttrace
> > - just shows the keyboard
> > kdb_enter() at kdb_enter+0x35/frame
> > vt_kbdevent() at vt_kbdevent+0x329/frame
> > kbdmux_intr() at kbdmux_intr+0x19/frame
> > taskqueue_run_locked() at taskqueue_run_locked+0x175/frame
> > taskqueue_run() at taskqueue_run+0x44/frame
> > taskqueue_swi_giant_run(0) at taskqueue_swi_giant_run+0xe/frame
> > ithread_loop() at ithread_loop+0x237/frame
> > fork_exit() at fork_exit+0x6c/frame
> > fork_trampoline() at 0x../frame
> >
> > show all vmem
> > vmem 0x.. 'transient arena'
> > quantum: 4096
> > size: 23592960
> > inuse: 0
> > free: 23592960
> > busy tags: 0
> > free tags: 2
> >           inuse  size      free  size
> > 16777216  0      0         1     23592960
> > vmem 0x.. 'buffer arena'
> > quantum: 4096
> > size: 94683136
> > inuse: 94502912
> > free: 180224
> > busy tags: 1463
> > free tags: 3
> >           inuse  size      free  size
> > 16384     2      32768     1     16384
> > 32768     39     1277952   1     32768
> > 65536     1422   93192192  0     0
> > 131072    0      0         1     131072
> > vmem 0x.. 'i386trampoline'
> > quantum: 1
> > size: 24576
> > inuse: 20860
> > free: 3716
> > busy tags: 9
> > free tags: 3
> >           inuse  size      free  size
> > 32        1      48        1     52
> > 64        2      208       0     0
> > 128       2      280       0     0
> > 2048      1      2048      1     3664
> > 4096      2      8192      0     0
> > 8192      1      10084     0     0
> > vmem 0x.. 'kernel rwx arena'
> > quantum: 4096
> > size: 0
> > inuse: 0
> > free: 0
> > busy tags: 0
> > free tags: 0
> > vmem 0x.. 'kernel area dom'
> > quantum: 4096
> > size: 56623104
> > inuse: 56582144
> > free: 40960
> > busy tags: 11224
> > free tags: 3
> >I think this is the trouble.
> >
> >Did you try reducing kern.maxvnodes? What is the default value for
> >the knob on your machine?
> The default is 84342.
> I have tried 64K, 32K and 128K and they all hung sooner or later.
> For the 32K case, I did see vnodes being recycled for a while before it got hung,
> so it isn't just when it hits the limit.
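[To put numbers on Kostik's observation above: the 'kernel area dom' arena
is 56623104 bytes with only 40960 bytes free -- ten 4 KB pages, i.e. about
99.93% consumed -- and 11224 busy boundary tags tracking it. One plausible
reason re-adding UMA_ZONE_NOFREE rescues the hang: UMA never drains a
NOFREE keg, so btag slabs, once allocated, are never handed back and never
have to be re-allocated through the very arena they manage. A paraphrased
sketch of that check, from memory of sys/vm/uma_core.c of that era, so
verify against your tree:]

	/*
	 * Paraphrased from sys/vm/uma_core.c: kegs created with
	 * UMA_ZONE_NOFREE are never drained, so their slabs (and the
	 * KVA backing them) stay put once allocated.
	 */
	static void
	keg_drain(uma_keg_t keg)
	{

		if ((keg->uk_flags & UMA_ZONE_NOFREE) != 0 ||
		    keg->uk_freef == NULL)
			return;
		/* ...otherwise free cached slabs back to the VM... */
	}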
> Although it is much easier for me to reproduce on an NFS mount, I did see
> a hang while doing a kernel build on UFS (no NFS mount on the machine at
> that time).
>
> So, I now know that the problem pre-dates r358252 and is not NFS specific.
>
> I'm now bisecting back further to try and isolate the commit that causes this.
> (Unfortunately, each test cycle can take days. I now know that I have to do
> several of these kernel builds, which take hours each, to see if a hang is going
> to happen.)
>
> I'll post if/when I have more, rick
>
> We scaled maxvnodes for ZFS and UFS; it might be that NFS is even more
> demanding, having a larger node size.
>
> >           inuse  size       free  size
> > 4096      11091  45428736   0     0
> > 8192      63     516096     0     0
> > 16384     12     196608     0     0
> > 32768     6      196608     0     0
> > 40960     2      81920      1     40960
> > 65536     16     1048576    0     0
> > 94208     1      94208      0     0
> > 110592    1      110592     0     0
> > 131072    15     2441216    0     0
> > 262144    15     3997696    0     0
> > 524288    1      524288     0     0
> > 1048576   1      1945600    0     0
> > vmem 0x.. 'kernel arena'
> > quantum: 4096
> > size: 390070272
> > inuse: 386613248
> > free: 3457024
> > busy tags: 873
> > free tags: 3
> >           inuse  size       free  size
> > 4096      35     143360     1     4096
> > 8192      2      16384      2     16384
> > 12288     1      12288      0     0
> > 16384     30     491520     0     0
> > 20480     140    2867200    0     0
> > 65536     1      65536      0     0
> > 131072    631    82706432   0     0
> > 1048576   0      0          1     1339392
> > 2097152   27     56623104   1     2097152
> > 8388608   1      13774848   0     0
> > 16777216  3      74883072   0     0
> > 33554432  1      36753408   0     0
> > 67108864  1      118276096  0     0
> >
> > alltrace
> > - I can't face typing too much more, but I'll put a few
> >   here that look interesting
> >
> > - for csh
> > sched_switch()
> > mi_switch()
> > kern_yield()
> > getblkx()
> > breadn_flags()
> > ffs_update()
> > ufs_inactive()
> > VOP_INACTIVE()
> > vinactivef()
> > vput_final()
> > vm_object_deallocate()
> > vm_map_process_deferred()
> > kern_munmap()
> > sys_munmap()
> >
> > - For cc
> > sched_switch()
> > mi_switch()
> > sleepq_switch()
> > sleepq_timedwait()
> > _sleep()
> > pause_sbt()
> > vmem_bt_alloc()
> > keg_alloc_slab()
> > zone_import()
> > cache_alloc()
> > cache_alloc_retry()
> > uma_zalloc_arg()
> > bt_fill()
> > vmem_xalloc()
> > vmem_alloc()
> > kmem_alloc()
> > kmem_malloc_domainset()
> > page_alloc()
> > keg_alloc_slab()
> > zone_import()
> > cache_alloc()
> > cache_alloc_retry()
> > uma_zalloc_arg()
> > nfscl_nget()
> > nfs_create()
> > vop_sigdefer()
> > nfs_vnodeops_bypass()
> > VOP_CREATE_APV()
> > vn_open_cred()
> > vn_open()
> > kern_openat()
> > sys_openat()
> >
> > Then there are a bunch of these for cc, make:
> > sched_switch()
> > mi_switch()
> > sleepq_switch()
> > sleepq_catch_signals()
> > sleepq_wait_sig()
> > kern_wait6()
> > sys_wait4()
> >
> > - for vnlru
> > sched_switch()
> > mi_switch()
> > sleepq_switch()
> > sleepq_timedwait()
> > _sleep()
> > vnlru_proc()
> > fork_exit()
> > fork_trampoline()
> >
> > - for syncer
> > sched_switch()
> > mi_switch()
> > critical_exit_preempt()
> > intr_event_handle()
> > intr_execute_handlers()
> > lapic_handle_intr()
> > Xapic_isr1()
> > - interrupt
> > memset()
> > cache_alloc()
> > cache_alloc_retry()
> > uma_zalloc_arg()
> > vmem_xalloc()
> > vmem_bt_alloc()
> > keg_alloc_slab()
> > zone_import()
> > cache_alloc()
> > cache_alloc_retry()
> > uma_zalloc_arg()
> > bt_fill()
> > vmem_xalloc()
> > vmem_alloc()
> > bufkva_alloc()
> > getnewbuf()
> > getblkx()
> > breadn_flags()
> > ffs_update()
> > ffs_sync()
> > sync_fsync()
> > VOP_FSYNC_APV()
> > sched_sync()
> > fork_exit()
> > fork_trampoline()
> >
> > - For bufdaemon (a bunch of these)
> > sched_switch()
> > mi_switch()
> > sleepq_switch()
> > sleepq_timedwait()
> > _sleep()
> > buf_daemon()
> > fork_exit()
> > fork_trampoline()
> >
> > vmdaemon and pagedaemon are basically just like above, sleeping in
> > vm_daemon()
> > or
> > vm_pageout_worker()
> > or
> > vm_pageout_laundry_worker()
> > or
> > uma_reclaim_worker()
> >
> > That's all the typing I can take right now.
> > I can probably make this happen again if you want more specific stuff.
> >
> > rick

_______________________________________________
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"