From: Andriy Gapon <avg@FreeBSD.org>
To: Konstantin Belousov
Cc: alc@FreeBSD.org, freebsd-arch@FreeBSD.org, Alan Cox
Subject: Re: kva size on amd64
Date: Fri, 01 Feb 2013 12:52:42 +0200
Message-ID: <510B9E7A.1070709@FreeBSD.org>
In-Reply-To: <20130201095735.GM2522@kib.kiev.ua>
References: <507E7E59.8060201@FreeBSD.org> <51098743.2050603@FreeBSD.org>
 <510A2C09.6030709@FreeBSD.org> <510AB848.3010806@rice.edu>
 <510B8F2B.5070609@FreeBSD.org> <20130201095735.GM2522@kib.kiev.ua>

on 01/02/2013 11:57 Konstantin Belousov said the following:
> On Fri, Feb 01, 2013 at 11:47:23AM +0200, Andriy Gapon wrote:
> I think that the rework of the ZFS memory management should remove the
> use of uma or kmem_alloc() entirely.  From what I heard, in part from
> you, there is no reason to keep the filesystem caches mapped at all
> times.
>
> I hope to commit shortly facilities that would allow ZFS to easily
> manage copying for I/O from the unmapped set of pages.  The
> checksumming you mentioned would require some more work, but this does
> not look insurmountable.  Having ZFS use raw vm_page_t for caching
> would also remove the pressure on KVA.

Yes, this would be perfect.

I think that perhaps we also need some helper API to manage groups of
pages.  E.g. right now ZFS can malloc() or uma_zalloc() a 32KB buffer
and get back a single handle (a pointer to the mapped pages), which is
convenient.  So it would be useful to have some representation for,
e.g., N non-contiguous unmapped physical pages that logically represent
M KB of contiguous data.  A rough sketch of what I mean is below.

An opposite issue is, e.g., packing 4 (or is it three?) unrelated
512-byte blocks into a single page, as is possible with uma today.  So
perhaps some "unmapped uma"?  (See the second sketch below.)

Another, purely ZFS, issue is that the ZFS code freely accesses buffers
holding metadata.  Adding mapping and unmapping code around all such
accesses could be cumbersome.

All in all, this is not a quick project, IMO.

>> P.S.
>> BTW, do I understand correctly that the reservation of kernel page
>> tables happens through vm_map_insert -> pmap_growkernel ?
>
> Yes.  E.g. kmem_suballoc -> vm_map_find -> vm_map_insert ->
> pmap_growkernel.

Thank you!
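
To make the "single handle for N unmapped pages" idea more concrete,
here is a minimal sketch.  The vm_pgroup structure and its fields are
made up for illustration -- nothing like this exists in the tree today --
but the temporary per-page mappings go through the real sf_buf(9)
interface:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/sf_buf.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical handle: pg_npages possibly non-contiguous, unmapped
     * physical pages that logically hold pg_size bytes of contiguous
     * data.  (Invented for this sketch, not an existing KPI.)
     */
    struct vm_pgroup {
            vm_page_t       *pg_pages;      /* backing pages, no KVA mapping */
            int             pg_npages;      /* N */
            size_t          pg_size;        /* M, in bytes */
    };

    /*
     * Copy 'len' bytes starting at logical offset 'off' out of the
     * group, mapping one page at a time via sf_buf(9) so that no
     * long-term KVA is consumed.  May sleep (SFB_DEFAULT).
     */
    static void
    vm_pgroup_copyout(struct vm_pgroup *pg, size_t off, void *dst,
        size_t len)
    {
            struct sf_buf *sf;
            char *p;
            size_t chunk, poff;

            KASSERT(off + len <= pg->pg_size,
                ("copy outside of page group"));
            p = dst;
            while (len > 0) {
                    poff = off & PAGE_MASK;
                    chunk = MIN(len, PAGE_SIZE - poff);
                    sf = sf_buf_alloc(pg->pg_pages[off >> PAGE_SHIFT],
                        SFB_DEFAULT);
                    bcopy((char *)sf_buf_kva(sf) + poff, p, chunk);
                    sf_buf_free(sf);
                    p += chunk;
                    off += chunk;
                    len -= chunk;
            }
    }

On amd64 sf_buf_kva() resolves through the direct map, so the per-page
mapping is essentially free there; the sketch matters more for
architectures without a direct map.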
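
And for contrast, the packing behaviour that an "unmapped uma" would
have to preserve: a plain uma zone already fits several small items
into each mapped page.  The zone name and wrapper function here are
made up; the uma calls themselves are the stock KPI:

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    static uma_zone_t buf512_zone;

    static void
    buf512_init(void)
    {
            /*
             * uma packs multiple 512-byte items into each (mapped)
             * page.  An "unmapped uma" would have to provide the same
             * density while handing back something like a page+offset
             * pair instead of a KVA pointer.
             */
            buf512_zone = uma_zcreate("buf512", 512, NULL, NULL, NULL,
                NULL, UMA_ALIGN_PTR, 0);
    }

Callers would then use uma_zalloc(buf512_zone, M_WAITOK) and
uma_zfree() exactly as today; the open question is what the unmapped
equivalent of the returned pointer looks like.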
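
Regarding the P.S.: for anyone reading along, the relevant check sits
in vm_map_insert() and looks roughly like this (paraphrased from
vm/vm_map.c, not a verbatim quote):

    /*
     * When an entry is inserted into the kernel map above the current
     * end of the kernel page tables, grow the pmap to cover it; this
     * is where the page-table pages actually get allocated.
     */
    if (map == kernel_map && end > kernel_vm_end)
            pmap_growkernel(end);

-- 
Andriy Gapon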