From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 220971] Freebsd 11.0p11 - system freeze on intensive I/O
Date: Sat, 12 Aug 2017 11:09:26 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220971

--- Comment #6 from Mark Millard ---
(In reply to execve from comment #4)

FYI: since you have more RAM than the original context for that stress
command, I'll quote from the man page:

    -m, --vm N        spawn N workers spinning on malloc()/free()
        --vm-bytes B  malloc B bytes per vm worker (default is 256MB)
    -d, --hdd N       spawn N workers spinning on write()/unlink()
        --vm-keep     redirty memory instead of freeing and reallocating

So:

    stress -d 2 -m 3 --vm-keep

is only doing 3*256MB = 768MB of VM use. That was a large percentage of the
1GB of RAM that the related bugzilla 206048 indicated as the context for the
command. It is not that much of around 8 GiBytes of RAM.

-- 
You are receiving this mail because:
You are the assignee for the bug.
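
[Editor's sketch, not from the original comment: based on the man page excerpt
quoted above, one way to scale the same invocation so the VM workers cover most
of an 8 GiB machine is to raise --vm-bytes; the 2G per-worker size here is an
illustrative assumption, not a value from the bug report.]

    # assumes sysutils/stress is installed; 3 workers * 2G each = ~6 GiB of ~8 GiB RAM
    stress -d 2 -m 3 --vm-bytes 2G --vm-keep

With --vm-keep the workers keep redirtying the same allocation instead of
freeing and reallocating it, so the memory pressure stays roughly constant
for the length of the run.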