Date: Sat, 27 Sep 2014 10:51:43 +0200
Subject: Re: vm_page_array and VM_PHYSSEG_SPARSE
From: Svatopluk Kraus
To: alc@freebsd.org
Cc: FreeBSD Arch
List-Id: Discussion related to FreeBSD architecture

On Fri, Sep 26, 2014 at 8:08 PM, Alan Cox wrote:
>
> On Wed, Sep 24, 2014 at 7:27 AM, Svatopluk Kraus wrote:
>
>> Hi,
>>
>> Michal and I are finishing the new ARM pmap-v6 code. There is one
>> problem we've dealt with somehow, but now we would like to do it
>> better. It's about physical pages which are allocated before the vm
>> subsystem is initialized. While these pages can later be found in
>> vm_page_array when the VM_PHYSSEG_DENSE memory model is used, that is
>> not true for the VM_PHYSSEG_SPARSE memory model. And the ARM world
>> uses the VM_PHYSSEG_SPARSE model.
>>
>> It really would be nice to utilize vm_page_array for such preallocated
>> physical pages even when the VM_PHYSSEG_SPARSE memory model is used.
>> Things could be much easier then. In our case, it's about pages which
>> are used for level 2 page tables. In the VM_PHYSSEG_SPARSE model, we
>> have two sets of such pages: the first are preallocated, and the
>> second are allocated after the vm subsystem has been initialized. We
>> must deal with each set differently, so the code is more complex and
>> so is the debugging.
>>
>> Thus we need some way to say that a part of physical memory should be
>> included in vm_page_array, but the pages from that region should not
>> be put on the free list during initialization. We think that such a
>> possibility could be utilized in general.
>> There could be a need for some physical space which:
>>
>> (1) is needed only during boot, and later on it can be freed and put
>> into the vm subsystem,
>>
>> (2) is needed for something else, and the vm_page_array code could be
>> used without some kind of duplication of it.
>>
>> There is already some code which deals with blacklisted pages in the
>> vm_page.c file. So the easiest way to deal with the presented
>> situation is to add some callback to this part of the code which will
>> be able to exclude either a whole phys_avail[i], phys_avail[i+1]
>> region or single pages. As the biggest phys_avail region is used for
>> vm subsystem allocations, some more coding is needed there. (However,
>> blacklisted pages are not dealt with on that part of the region.)
>>
>> We would like to know if there is any objection:
>>
>> (1) to dealing with the presented problem,
>> (2) to dealing with the problem in the presented way.
>>
>> Some help is very much appreciated. Thanks
>>
>
> As an experiment, try modifying vm_phys.c to use dump_avail instead of
> phys_avail when sizing vm_page_array. On amd64, where the same problem
> exists, this allowed me to use VM_PHYSSEG_SPARSE. Right now, this is
> probably my preferred solution. The catch is that not all architectures
> implement dump_avail, but my recollection is that arm does.
>

Frankly, I would prefer this too, but there is one big open question:
what is dump_avail for? Using it for vm_page_array initialization and
segmentation means that phys_avail must be a subset of it. And this must
be stated and visible enough; maybe it should even be checked in code. I
like the idea of thinking about dump_avail as something that describes
all memory in a system, but that's not how dump_avail is defined in the
architectures now.

I will experiment with it on Monday then. However, it's not only about
how memory segments are created in vm_phys.c; it's also about how the
vm_page_array size is computed in vm_page.c.

Svata
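
PS: To make the subset requirement concrete, here is a rough sketch of
the kind of check I have in mind. It is only illustrative: the function
name is made up, the placement (early in vm_page_startup(), before
vm_page_array is sized) is just a suggestion, and it assumes both
phys_avail[] and dump_avail[] keep their current layout of start/end
pairs terminated by a zero end entry.

/*
 * Sketch only: verify that every phys_avail[] range is covered by some
 * dump_avail[] range, so that sizing vm_page_array (and the vm_phys
 * segments) from dump_avail cannot lose any managed page.
 */
static void
vm_page_check_phys_subset(void)
{
    vm_paddr_t start, end;
    int i, j;

    for (i = 0; phys_avail[i + 1] != 0; i += 2) {
        start = phys_avail[i];
        end = phys_avail[i + 1];
        /* Look for a single dump_avail range covering the whole span. */
        for (j = 0; dump_avail[j + 1] != 0; j += 2) {
            if (start >= dump_avail[j] && end <= dump_avail[j + 1])
                break;
        }
        KASSERT(dump_avail[j + 1] != 0,
            ("phys_avail region %#jx-%#jx not covered by dump_avail",
            (uintmax_t)start, (uintmax_t)end));
    }
}

The sizing side of the experiment would then just walk dump_avail
instead of phys_avail in the loops that compute how many pages
vm_page_array has to cover in vm_page.c, and in vm_phys.c where the
segments are created.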