From owner-freebsd-bugs@freebsd.org  Thu May 12 19:46:18 2016
From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 209471] Listen queue overflow due to too many sockets stuck in CLOSED state
Date: Thu, 12 May 2016 19:46:18 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209471

            Bug ID: 209471
           Summary: Listen queue overflow due to too many sockets stuck
                    in CLOSED state
           Product: Base System
           Version: 10.3-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs@FreeBSD.org
          Reporter: rblayzor@inoc.net
                CC: freebsd-amd64@FreeBSD.org

10.3-RELEASE FreeBSD 10.3-RELEASE #0 r297856M

We are randomly seeing daemon applications on a mail server (Dovecot and
Exim) hit listen queue overflows because hundreds or thousands of TCP
connections get stuck in the CLOSED state.

Kernel messages:

sonewconn: pcb 0xfffff800155a3498: Listen queue overflow: 301 already in queue awaiting acceptance (50 occurrences)
sonewconn: pcb 0xfffff800155a3498: Listen queue overflow: 301 already in queue awaiting acceptance (50 occurrences)
sonewconn: pcb 0xfffff800155a3498: Listen queue overflow: 301 already in queue awaiting acceptance (50 occurrences)
sonewconn: pcb 0xfffff800155a3498: Listen queue overflow: 301 already in queue awaiting acceptance (48 occurrences)
sonewconn: pcb 0xfffff800155a3498: Listen queue overflow: 301 already in queue awaiting acceptance (50 occurrences)
sonewconn: pcb 0xfffff800155a3498: Listen queue overflow: 301 already in queue awaiting acceptance (50 occurrences)
...
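For reference, the per-listener queue depth and the global accept queue cap
can be checked with the stock base-system tools (output omitted here; I am
assuming the 10.x sysctl name, the older name being kern.ipc.somaxconn):

# per-listener queue depth (qlen/incqlen/maxqlen) to identify the affected socket
netstat -Lan

# system-wide cap on a socket's accept queue (listen backlog)
sysctl kern.ipc.soacceptqueue

A sample of the netstat output showing the stuck sessions: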
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.19266  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.12342  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.29123  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.23215  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.56331  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.52066  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.33798  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.34610  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.15283  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.51922  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.7406   CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.41955  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.56028  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.6446   CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.2474   CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.51723  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.51069  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.18158  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.38435  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.46607  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.33359  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.62935  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.11673  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.51459  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.36490  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.27831  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.44081  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.28384  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.43745  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.64070  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.35722  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.63738  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.14573  CLOSED
tcp6      32      0  2607:f058:110:2:.4190  2607:f058:110:2:.12311  CLOSED
... (hundreds and hundreds of these lines removed)
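To get a rough count per TCP state, and to try clearing individual stuck
sessions without a full reboot, something like the following could be used.
The tcpdrop addresses are taken from the sockstat output below, since netstat
truncates them; I have not confirmed that tcpdrop succeeds once a connection
is stuck in this state:

# count sessions per TCP state
netstat -an -p tcp | awk '/^tcp/ {print $6}' | sort | uniq -c

# attempt to drop one stuck session: tcpdrop laddr lport faddr fport
tcpdrop 2607:f058:110:2::1:2 4190 2607:f058:110:2::f:0 11673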
Looking at sockstat, these connections no longer appear to be associated with
any process:

?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:25    2607:f058:110:2::f:0:49398
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:110   2607:f058:110:2::f:0:28079
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:25    2607:f058:110:2::f:1:52383
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:110   2607:f058:110:2::f:1:35856
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:143   2607:f058:110:2::f:0:27734
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:143   2607:f058:110:2::f:1:36851
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:25    2607:f058:110:2::f:0:40977
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:110   2607:f058:110:2::f:0:51172
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:25    2607:f058:110:2::f:1:16197
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:110   2607:f058:110:2::f:1:1999
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:143   2607:f058:110:2::f:0:60423
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:143   2607:f058:110:2::f:1:16527
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:25    2607:f058:110:2::f:0:34327
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:110   2607:f058:110:2::f:0:5437
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:25    2607:f058:110:2::f:1:30114
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:110   2607:f058:110:2::f:1:57136
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:143   2607:f058:110:2::f:0:58399
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:143   2607:f058:110:2::f:1:37073
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:11673
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:33798
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:65207
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:13326
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:27879
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:2899
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:39172
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:19330
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:18694
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:1251
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:43392
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:44343
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:36523
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:41551
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:24288
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:3830
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:43978
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:8897
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:65187
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:14214
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:1:55279
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:31178
?        ?          ?     ?  tcp6   2607:f058:110:2::1:2:4190  2607:f058:110:2::f:0:49242
... (hundreds or thousands of lines removed)

The only way to fix the issue is to reboot the server (in this case a VMware
ESXi 5.5 VM). The network driver is "vmx", if that makes any difference.

-- 
You are receiving this mail because:
You are the assignee for the bug.