From: Adrian Chadd <adrian.chadd@gmail.com>
To: Alan Cox
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Date: Thu, 23 Jun 2011 10:36:06 +0800
Subject: Re: svn commit: r223307 - head/sys/vm
In-Reply-To: <201106191913.p5JJDOqJ006272@svn.freebsd.org>

Can this commit please be reverted whilst the kinks are worked out for MIPS?

I'm currently knee-deep in MIPS-related hackery and I'd rather stay on
an unpatched -HEAD so dogfood can be properly self-consumed.

Thanks,

Adrian

On 20 June 2011 03:13, Alan Cox wrote:
> Author: alc
> Date: Sun Jun 19 19:13:24 2011
> New Revision: 223307
> URL: http://svn.freebsd.org/changeset/base/223307
>
> Log:
>   Precisely document the synchronization rules for the page's dirty field.
>   (Saying that the lock on the object that the page belongs to must be held
>   only represents one aspect of the rules.)
>
>   Eliminate the use of the page queues lock for atomically performing read-
>   modify-write operations on the dirty field when the underlying architecture
>   supports atomic operations on char and short types.
>
>   Document the fact that 32KB pages aren't really supported.
>
>   Reviewed by:  attilio, kib
>
> Modified:
>   head/sys/vm/vm_fault.c
>   head/sys/vm/vm_page.c
>   head/sys/vm/vm_page.h
>
> Modified: head/sys/vm/vm_fault.c
> ==============================================================================
> --- head/sys/vm/vm_fault.c      Sun Jun 19 18:34:49 2011        (r223306)
> +++ head/sys/vm/vm_fault.c      Sun Jun 19 19:13:24 2011        (r223307)
> @@ -1089,10 +1089,20 @@ vm_fault_quick_hold_pages(vm_map_t map,
>                          * caller's changes may go unnoticed because they are
>                          * performed through an unmanaged mapping or by a DMA
>                          * operation.
> +                        *
> +                        * The object lock is not held here.  Therefore, like
> +                        * a pmap operation, the page queues lock may be
> +                        * required in order to call vm_page_dirty().  See
> +                        * vm_page_clear_dirty_mask().
>                          */
> +#if defined(__amd64__) || defined(__i386__) || defined(__ia64__) || \
> +    defined(__mips__)
> +                       vm_page_dirty(*mp);
> +#else
>                         vm_page_lock_queues();
>                         vm_page_dirty(*mp);
>                         vm_page_unlock_queues();
> +#endif
>                 }
>         }
>         if (pmap_failed) {
>
> Modified: head/sys/vm/vm_page.c
> ==============================================================================
> --- head/sys/vm/vm_page.c       Sun Jun 19 18:34:49 2011        (r223306)
> +++ head/sys/vm/vm_page.c       Sun Jun 19 19:13:24 2011        (r223307)
> @@ -729,7 +729,12 @@ vm_page_sleep(vm_page_t m, const char *m
>  /*
>   *     vm_page_dirty:
>   *
> - *     make page all dirty
> + *     Set all bits in the page's dirty field.
> + *
> + *     The object containing the specified page must be locked if the call is
> + *     made from the machine-independent layer.  If, however, the call is
> + *     made from the pmap layer, then the page queues lock may be required.
> + *     See vm_page_clear_dirty_mask().
>   */
>  void
>  vm_page_dirty(vm_page_t m)
> @@ -2325,15 +2330,41 @@ vm_page_clear_dirty_mask(vm_page_t m, in
>         /*
>          * If the object is locked and the page is neither VPO_BUSY nor
>          * PG_WRITEABLE, then the page's dirty field cannot possibly be
> -        * modified by a concurrent pmap operation.
> +        * set by a concurrent pmap operation.
>          */
>         VM_OBJECT_LOCK_ASSERT(m->object, MA_OWNED);
>         if ((m->oflags & VPO_BUSY) == 0 && (m->flags & PG_WRITEABLE) == 0)
>                 m->dirty &= ~pagebits;
>         else {
> +#if defined(__amd64__) || defined(__i386__) || defined(__ia64__) || \
> +    defined(__mips__)
> +               /*
> +                * On the aforementioned architectures, the page queues lock
> +                * is not required by the following read-modify-write
> +                * operation.  The combination of the object's lock and an
> +                * atomic operation suffice.  Moreover, the pmap layer on
> +                * these architectures can call vm_page_dirty() without
> +                * holding the page queues lock.
> +                */
> +#if PAGE_SIZE == 4096
> +               atomic_clear_char(&m->dirty, pagebits);
> +#elif PAGE_SIZE == 8192
> +               atomic_clear_short(&m->dirty, pagebits);
> +#elif PAGE_SIZE == 16384
> +               atomic_clear_int(&m->dirty, pagebits);
> +#else
> +#error "PAGE_SIZE is not supported."
> +#endif
> +#else
> +               /*
> +                * Otherwise, the page queues lock is required to ensure that
> +                * a concurrent pmap operation does not set the page's dirty
> +                * field during the following read-modify-write operation.
> +                */
>                 vm_page_lock_queues();
>                 m->dirty &= ~pagebits;
>                 vm_page_unlock_queues();
> +#endif
>         }
>  }
>
>
> Modified: head/sys/vm/vm_page.h
> ==============================================================================
> --- head/sys/vm/vm_page.h       Sun Jun 19 18:34:49 2011        (r223306)
> +++ head/sys/vm/vm_page.h       Sun Jun 19 19:13:24 2011        (r223307)
> @@ -89,10 +89,26 @@
>   *     and offset to which this page belongs (for pageout),
>   *     and sundry status bits.
>   *
> - *     Fields in this structure are locked either by the lock on the
> - *     object that the page belongs to (O), its corresponding page lock (P),
> - *     or by the lock on the page queues (Q).
> - *
> + *     In general, operations on this structure's mutable fields are
> + *     synchronized using either one of or a combination of the lock on the
> + *     object that the page belongs to (O), the pool lock for the page (P),
> + *     or the lock for either the free or paging queues (Q).  If a field is
> + *     annotated below with two of these locks, then holding either lock is
> + *     sufficient for read access, but both locks are required for write
> + *     access.
> + *
> + *     In contrast, the synchronization of accesses to the page's dirty field
> + *     is machine dependent (M).  In the machine-independent layer, the lock
> + *     on the object that the page belongs to must be held in order to
> + *     operate on the field.  However, the pmap layer is permitted to set
> + *     all bits within the field without holding that lock.  Therefore, if
> + *     the underlying architecture does not support atomic read-modify-write
> + *     operations on the field's type, then the machine-independent layer
> + *     must also hold the page queues lock when performing read-modify-write
> + *     operations and the pmap layer must hold the page queues lock when
> + *     setting the field.  In the machine-independent layer, the
> + *     implementation of read-modify-write operations on the field is
> + *     encapsulated in vm_page_clear_dirty_mask().
>   */
>
>  TAILQ_HEAD(pglist, vm_page);
> @@ -120,18 +136,19 @@ struct vm_page {
>         u_char  busy;                   /* page busy count (O) */
>         /* NOTE that these must support one bit per DEV_BSIZE in a page!!! */
>         /* so, on normal X86 kernels, they must be at least 8 bits wide */
> +       /* In reality, support for 32KB pages is not fully implemented. */
>  #if PAGE_SIZE == 4096
>         u_char  valid;                  /* map of valid DEV_BSIZE chunks (O) */
> -       u_char  dirty;                  /* map of dirty DEV_BSIZE chunks (O) */
> +       u_char  dirty;                  /* map of dirty DEV_BSIZE chunks (M) */
>  #elif PAGE_SIZE == 8192
>         u_short valid;                  /* map of valid DEV_BSIZE chunks (O) */
> -       u_short dirty;                  /* map of dirty DEV_BSIZE chunks (O) */
> +       u_short dirty;                  /* map of dirty DEV_BSIZE chunks (M) */
>  #elif PAGE_SIZE == 16384
>         u_int valid;                    /* map of valid DEV_BSIZE chunks (O) */
> -       u_int dirty;                    /* map of dirty DEV_BSIZE chunks (O) */
> +       u_int dirty;                    /* map of dirty DEV_BSIZE chunks (M) */
>  #elif PAGE_SIZE == 32768
>         u_long valid;                   /* map of valid DEV_BSIZE chunks (O) */
> -       u_long dirty;                   /* map of dirty DEV_BSIZE chunks (O) */
> +       u_long dirty;                   /* map of dirty DEV_BSIZE chunks (M) */
>  #endif
>  };
>
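
For anyone following the locking change: below is a minimal userspace
sketch of the lock-free path the commit introduces, with C11
atomic_fetch_or()/atomic_fetch_and() standing in for FreeBSD's
atomic_set_char()/atomic_clear_char().  A 4KB page (u_char dirty field)
is assumed; the object lock, vm_page structure, and kernel types are
omitted, and page_dirty()/page_clear_dirty_mask() here are illustrative
stand-ins, not the kernel functions.

/*
 * Sketch of the lock-free path: pmap may set dirty bits at any time
 * without the object lock, so the machine-independent side must clear
 * bits with an atomic read-modify-write.  A plain "dirty &= ~pagebits"
 * could load the field, miss a concurrent set, and store the stale
 * value back.
 */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned char dirty;	/* stands in for m->dirty */

/* pmap side: set all bits, no page queues lock held. */
static void
page_dirty(void)
{
	atomic_fetch_or(&dirty, 0xff);
}

/* MI side: the object lock (not modeled) excludes other clearers. */
static void
page_clear_dirty_mask(unsigned char pagebits)
{
	atomic_fetch_and(&dirty, (unsigned char)~pagebits);
}

int
main(void)
{
	page_dirty();
	page_clear_dirty_mask(0x0f);
	printf("dirty = 0x%02x\n", (unsigned int)dirty);	/* 0xf0 */
	return (0);
}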
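
And the rule the new vm_page.h comment spells out for architectures
without such atomics, sketched the same way: both the clearing side and
the setting side must hold the page queues lock, modeled here with a
pthread mutex (again an illustrative stand-in, not the kernel lock).

/*
 * Sketch of the fallback path: with no atomic read-modify-write on the
 * field's type, the page queues lock serializes the setter's store
 * against the clearer's load-modify-store.
 */
#include <pthread.h>

static pthread_mutex_t page_queues_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned char dirty;

/* pmap side: must now take the lock merely to set bits. */
static void
page_dirty(void)
{
	pthread_mutex_lock(&page_queues_lock);
	dirty = 0xff;
	pthread_mutex_unlock(&page_queues_lock);
}

/* MI side: the lock brackets the read-modify-write. */
static void
page_clear_dirty_mask(unsigned char pagebits)
{
	pthread_mutex_lock(&page_queues_lock);
	dirty &= ~pagebits;
	pthread_mutex_unlock(&page_queues_lock);
}

The gain from r223307 is that the first form avoids taking a global
lock on every dirty-field update on amd64, i386, ia64, and mips.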