From owner-svn-src-stable@freebsd.org  Wed Nov  8 11:44:04 2017
Return-Path: <owner-svn-src-stable@freebsd.org>
Delivered-To: svn-src-stable@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
	by mailman.ysv.freebsd.org (Postfix) with ESMTP id 143C9E4FE02;
	Wed, 8 Nov 2017 11:44:04 +0000 (UTC)
	(envelope-from kib@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2610:1c1:1:6068::e6a:0])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate)
	by mx1.freebsd.org (Postfix) with ESMTPS id DFAA6761C1;
	Wed, 8 Nov 2017 11:44:03 +0000 (UTC)
	(envelope-from kib@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
	by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id vA8Bi2mI035477;
	Wed, 8 Nov 2017 11:44:02 GMT
	(envelope-from kib@FreeBSD.org)
Received: (from kib@localhost)
	by repo.freebsd.org (8.15.2/8.15.2/Submit) id vA8Bi2ft035476;
	Wed, 8 Nov 2017 11:44:02 GMT
	(envelope-from kib@FreeBSD.org)
Message-Id: <201711081144.vA8Bi2ft035476@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: kib set sender to kib@FreeBSD.org using -f
From: Konstantin Belousov <kib@FreeBSD.org>
Date: Wed, 8 Nov 2017 11:44:02 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject: svn commit: r325544 - stable/11/sys/amd64/amd64
X-SVN-Group: stable-11
X-SVN-Commit-Author: kib
X-SVN-Commit-Paths: stable/11/sys/amd64/amd64
X-SVN-Commit-Revision: 325544
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-stable@freebsd.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: SVN commit messages for all the -stable branches of the src tree
X-List-Received-Date: Wed, 08 Nov 2017 11:44:04 -0000

Author: kib
Date: Wed Nov  8 11:44:02 2017
New Revision: 325544
URL: https://svnweb.freebsd.org/changeset/base/325544

Log:
  MFC r325285, r325447:
  Restore an optimization that was temporarily disabled by r324665.
Modified:
  stable/11/sys/amd64/amd64/pmap.c
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/amd64/amd64/pmap.c
==============================================================================
--- stable/11/sys/amd64/amd64/pmap.c	Wed Nov  8 11:39:42 2017	(r325543)
+++ stable/11/sys/amd64/amd64/pmap.c	Wed Nov  8 11:44:02 2017	(r325544)
@@ -2896,8 +2896,8 @@ reclaim_pv_chunk_leave_pmap(pmap_t pmap, pmap_t locked
 static vm_page_t
 reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp)
 {
-	struct pv_chunk *pc, *pc_marker;
-	struct pv_chunk_header pc_marker_b;
+	struct pv_chunk *pc, *pc_marker, *pc_marker_end;
+	struct pv_chunk_header pc_marker_b, pc_marker_end_b;
 	struct md_page *pvh;
 	pd_entry_t *pde;
 	pmap_t next_pmap, pmap;
@@ -2910,6 +2910,7 @@ reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **l
 	uint64_t inuse;
 	int bit, field, freed;
 	bool start_di;
+	static int active_reclaims = 0;
 
 	PMAP_LOCK_ASSERT(locked_pmap, MA_OWNED);
 	KASSERT(lockp != NULL, ("reclaim_pv_chunk: lockp is NULL"));
@@ -2918,7 +2919,9 @@ reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **l
 	PG_G = PG_A = PG_M = PG_RW = 0;
 	SLIST_INIT(&free);
 	bzero(&pc_marker_b, sizeof(pc_marker_b));
+	bzero(&pc_marker_end_b, sizeof(pc_marker_end_b));
 	pc_marker = (struct pv_chunk *)&pc_marker_b;
+	pc_marker_end = (struct pv_chunk *)&pc_marker_end_b;
 
 	/*
 	 * A delayed invalidation block should already be active if
@@ -2928,12 +2931,21 @@ reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **l
 	start_di = pmap_not_in_di();
 
 	mtx_lock(&pv_chunks_mutex);
+	active_reclaims++;
 	TAILQ_INSERT_HEAD(&pv_chunks, pc_marker, pc_lru);
-	while ((pc = TAILQ_NEXT(pc_marker, pc_lru)) != NULL &&
+	TAILQ_INSERT_TAIL(&pv_chunks, pc_marker_end, pc_lru);
+	while ((pc = TAILQ_NEXT(pc_marker, pc_lru)) != pc_marker_end &&
 	    SLIST_EMPTY(&free)) {
 		next_pmap = pc->pc_pmap;
-		if (next_pmap == NULL)		/* marker */
+		if (next_pmap == NULL) {
+			/*
+			 * The next chunk is a marker.  However, it is
+			 * not our marker, so active_reclaims must be
+			 * > 1.  Consequently, the next_chunk code
+			 * will not rotate the pv_chunks list.
+			 */
 			goto next_chunk;
+		}
 		mtx_unlock(&pv_chunks_mutex);
 
 		/*
@@ -3047,8 +3059,24 @@ reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **l
 next_chunk:
 		TAILQ_REMOVE(&pv_chunks, pc_marker, pc_lru);
 		TAILQ_INSERT_AFTER(&pv_chunks, pc, pc_marker, pc_lru);
+		if (active_reclaims == 1 && pmap != NULL) {
+			/*
+			 * Rotate the pv chunks list so that we do not
+			 * scan the same pv chunks that could not be
+			 * freed (because they contained a wired
+			 * and/or superpage mapping) on every
+			 * invocation of reclaim_pv_chunk().
+			 */
+			while ((pc = TAILQ_FIRST(&pv_chunks)) != pc_marker) {
+				MPASS(pc->pc_pmap != NULL);
+				TAILQ_REMOVE(&pv_chunks, pc, pc_lru);
+				TAILQ_INSERT_TAIL(&pv_chunks, pc, pc_lru);
+			}
+		}
 	}
 	TAILQ_REMOVE(&pv_chunks, pc_marker, pc_lru);
+	TAILQ_REMOVE(&pv_chunks, pc_marker_end, pc_lru);
+	active_reclaims--;
 	mtx_unlock(&pv_chunks_mutex);
 	reclaim_pv_chunk_leave_pmap(pmap, locked_pmap, start_di);
 	if (m_pc == NULL && !SLIST_EMPTY(&free)) {
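
For readers unfamiliar with the pattern: the restored code brackets the
shared pv_chunks LRU list between two stack-allocated markers, so that a
scan keeps its place while pv_chunks_mutex is dropped and reacquired, and
so that chunks it rotates to the tail (behind pc_marker_end) are not
revisited within the same pass; active_reclaims guarantees that only one
scanner performs the rotation.  Below is a minimal, single-threaded
userspace sketch of that two-marker traversal, using the same
<sys/queue.h> TAILQ macros.  The chunk struct, its id field, and the
freeable() predicate are hypothetical stand-ins for struct pv_chunk,
pc_pmap, and the wired/superpage checks; the sketch mirrors only the
list manipulation, not the locking or pv-entry bookkeeping of
reclaim_pv_chunk().

	#include <sys/queue.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct chunk {
		int id;			/* < 0 marks a marker, as pc_pmap == NULL does */
		TAILQ_ENTRY(chunk) lru;
	};

	static TAILQ_HEAD(, chunk) chunks = TAILQ_HEAD_INITIALIZER(chunks);

	static bool
	freeable(struct chunk *c)
	{
		/* Stand-in for "no wired/superpage mappings": even ids only. */
		return (c->id % 2 == 0);
	}

	static void
	scan(void)
	{
		struct chunk marker = { .id = -1 }, marker_end = { .id = -2 };
		struct chunk *c;

		/*
		 * Bracket the current list contents between two markers so
		 * that chunks rotated to the tail during this scan are not
		 * revisited in the same pass.
		 */
		TAILQ_INSERT_HEAD(&chunks, &marker, lru);
		TAILQ_INSERT_TAIL(&chunks, &marker_end, lru);
		while ((c = TAILQ_NEXT(&marker, lru)) != &marker_end) {
			if (c->id < 0 || !freeable(c)) {
				/* Skip other markers and unfreeable chunks... */
				TAILQ_REMOVE(&chunks, &marker, lru);
				TAILQ_INSERT_AFTER(&chunks, c, &marker, lru);
				/*
				 * ...and, like the next_chunk path above,
				 * rotate what was skipped to the tail so the
				 * next scan starts with fresh chunks.
				 */
				while ((c = TAILQ_FIRST(&chunks)) != &marker) {
					TAILQ_REMOVE(&chunks, c, lru);
					TAILQ_INSERT_TAIL(&chunks, c, lru);
				}
				continue;
			}
			printf("reclaiming chunk %d\n", c->id);
			TAILQ_REMOVE(&chunks, c, lru);
		}
		TAILQ_REMOVE(&chunks, &marker, lru);
		TAILQ_REMOVE(&chunks, &marker_end, lru);
	}

	int
	main(void)
	{
		static struct chunk pool[5];

		for (int i = 0; i < 5; i++) {
			pool[i].id = i + 1;
			TAILQ_INSERT_TAIL(&chunks, &pool[i], lru);
		}
		scan();		/* reclaims 2 and 4; 1, 3, 5 are rotated to the tail */
		return (0);
	}

Here the rotation is unconditional because the demo has a single
scanner; in pmap.c it is guarded by active_reclaims == 1 precisely so
that concurrent reclaims do not shuffle each other's markers.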