Date:      Wed, 7 Jul 2004 15:29:02 +0200
From:      thefly <thefly@acaro.org>
To:        Brooks Davis <brooks@one-eyed-alien.net>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: ZEROCOPY between kernel and userland
Message-ID:  <20040707132902.GA7187@tyler>
In-Reply-To: <20040706212254.GA22673@Odin.AC.HMC.Edu>
References:  <FE045D4D9F7AED4CBFF1B3B813C85337051D920B@mail.sandvine.com> <20040706133640.GB5922@tyler> <20040706212254.GA22673@Odin.AC.HMC.Edu>

On Tue, Jul 06, 2004 at 02:22:54PM -0700, Brooks Davis wrote:
> [Please don't top-post, it tends to lose context.]
> 
> On Tue, Jul 06, 2004 at 03:36:40PM +0200, thefly wrote:
> > could you point me pls to some code of that? To me read-only access is
> > ok, userspace doesn't need to write anything on it, kernelspace does.
> > But what about locking issues between userspace read access and
> > kernelspace write access?
> 
> First, be aware that mmap is not necessarily faster than copyout on
> modern CPUs.  The cycles required to copy a few K of bytes aren't worth
> much of anything on a modern CPU compared to a page fault.  Second, if
> you still want to do things this way, take a look at the geom statistics
> mechanism.  IIRC, it works by using a generation number at the top and
> bottom of the stats structure.  The user copies the entire struct and
> then verifies that the copies of the generation number at the top and
> bottom of the struct are the same.  If so, it uses the copy it got.  If
> not, it tries again.
> 
> -- Brooks
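
(For reference, a minimal sketch of the generation-number scheme Brooks
describes; the struct and field names here are illustrative, not the
actual geom code:)

#include <string.h>
#include <sys/types.h>

struct my_stats { long counters[128]; };        /* placeholder payload */

/*
 * The kernel bumps gen_top before an update and gen_bottom after it,
 * so the two only match when the data between them is consistent.
 */
struct shared_stats {
        u_int           gen_top;        /* bumped before an update */
        struct my_stats data;           /* the exported statistics */
        u_int           gen_bottom;     /* bumped after an update */
};

/* Userland reader: copy the whole struct, retry until the two match. */
static void
read_snapshot(struct shared_stats *ss, struct shared_stats *copy)
{
        do {
                memcpy(copy, ss, sizeof(*copy));
        } while (copy->gen_top != copy->gen_bottom);
}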
The array's size is about 1MB, on a P4. I also considered the cost of
page faults and context switches, but I still can't put a real number
on either approach. Anyway, I'm planning to implement the mmap: the
process maps the array, and after the read it unmaps it. When it wants
a new snapshot it remaps, and gets the new map. That way, at mmap time
I can give the process the latest snapshot of my array. I've looked
inside kern/subr_devstat.c for the mmap implementation, but I still
can't understand how it works. The code is:

static int
devstat_mmap(dev_t dev, vm_offset_t offset, vm_paddr_t *paddr, int nprot)
{
        struct statspage *spp;

        /* The statistics pages are exported read-only. */
        if (nprot != VM_PROT_READ)
                return (-1);
        /*
         * d_mmap is called once per page of the mapping.  Walk the
         * page list from the head, consuming PAGE_SIZE of the offset
         * per entry, until the page the offset refers to is reached.
         */
        TAILQ_FOREACH(spp, &pagelist, list) {
                if (offset == 0) {
                        /* Hand the VM system this page's physical address. */
                        *paddr = vtophys(spp->stat);
                        return (0);
                }
                offset -= PAGE_SIZE;
        }
        /* Offset lies beyond the end of the page list. */
        return (-1);
}

Why does it count the offset back down instead of indexing from the
head? Isn't it an offset from the head? And anyway, there's no munmap()
implementation...
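
(For reference, a sketch of what the userland side of that
remap-per-snapshot plan could look like; "/dev/mystats" and ARRAY_SIZE
are placeholders for whatever the driver actually exports. Note that
munmap() needs no driver-side hook; the VM system tears the mapping
down itself:)

#include <sys/mman.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

#define ARRAY_SIZE      (1024 * 1024)   /* the ~1MB snapshot array */

int
main(void)
{
        void *snap;
        int fd;

        fd = open("/dev/mystats", O_RDONLY);
        if (fd == -1)
                err(1, "open");

        /* Each mmap() returns a mapping of the latest snapshot. */
        snap = mmap(NULL, ARRAY_SIZE, PROT_READ, MAP_SHARED, fd, 0);
        if (snap == MAP_FAILED)
                err(1, "mmap");

        /* ... read the snapshot through snap here ... */

        /* Drop the mapping; remap later to get a fresh snapshot. */
        munmap(snap, ARRAY_SIZE);
        close(fd);
        return (0);
}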

It just hands back the physical address of the memory area; where's the
reference count of the shared memory? Isn't it reference-counted? I
guess it's done by the page-fault handler...

In my case I'm using contigmem(), so I have the VA. In my mmap()
implementation, is it right just to return the VA that contigmem() gave
me, or the VA + offset (though I'm not planning to mmap at a specific
offset)?
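
(For what it's worth, a sketch of what such an mmap hook might look
like, assuming the array lives in one physically contiguous buffer;
my_array and my_array_size are placeholder names. The key point is that
the hook must return a physical address, i.e. vtophys() of the VA plus
the per-page offset, not the VA itself:)

/*
 * Hypothetical d_mmap hook for a contiguous buffer; like
 * devstat_mmap() above, it is called once per page, with offset
 * counting from the start of the mapping.
 */
static int
myarray_mmap(dev_t dev, vm_offset_t offset, vm_paddr_t *paddr, int nprot)
{
        if (nprot != VM_PROT_READ)
                return (-1);
        if (offset >= my_array_size)
                return (-1);
        /* Translate the kernel VA of this page to a physical address. */
        *paddr = vtophys((vm_offset_t)my_array + offset);
        return (0);
}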

About ng_ippact... is there any doc that says what it does, NOT in
Russian? :)

Thanks, everybody, for the answers, and thanks in advance for the next.

-- 
    Claudio "thefly" Martella
    thefly@acaro.org
    GNU/PG keyid: 0x8EA95625


