Date:      Fri, 4 Dec 1998 11:34:35 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Kevin Day <toasty@home.dragondata.com>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Nonblocking page fetching
Message-ID:  <199812041934.LAA16719@apollo.backplane.com>
References:   <199812041431.IAA27124@home.dragondata.com>


:I have an application where I'm streaming large amounts of data from disk,
:and throwing it on the screen. (Playing a movie, essentially). I'm mmapping
:the region that i'm playing, and just start memcpy'ing each frame into the
:renderer's buffer. This is very time critical, so running out of buffered
:data isn't good.
:
:I needed a way to trick the kernel into bringing pages off of disk without
:making my process block waiting for them. Essentially, if I'm playing frame
:10, I'd better already have frames 10-15 in ram, and should start bringing
:frame 16 in, while 10 is playing. (I tried keeping a 4-6 frame buffer)
:...
:
:What would be very nice is a syscall that could tell the vm system that a
:page will be needed shortly, so bring it in as soon as possible. Sort of
:like madvising with WILL_NEED, but a much stronger hint.

    The facility already exists in the system.  In fact, it even exists in
    -stable.

    mmap() the region as per normal.

    madvise() the region to be MADV_WILLNEED
    madvise() the region to be MADV_SEQUENTIAL
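
    For example, a minimal sketch of those calls (the file name, size
    handling, and error handling here are illustrative, not from the
    original mail):

        #include <sys/types.h>
        #include <sys/mman.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <err.h>

        int
        main(void)
        {
            int fd;
            off_t len;
            void *base;

            fd = open("movie.dat", O_RDONLY);
            if (fd < 0)
                err(1, "open");
            len = lseek(fd, 0, SEEK_END);

            base = mmap(NULL, (size_t)len, PROT_READ, MAP_SHARED, fd, 0);
            if (base == MAP_FAILED)
                err(1, "mmap");

            /*
             * Hint that the whole mapping will be needed and that access
             * is sequential, so the fault handler uses its read-ahead path.
             */
            if (madvise(base, (size_t)len, MADV_WILLNEED) < 0)
                err(1, "madvise(MADV_WILLNEED)");
            if (madvise(base, (size_t)len, MADV_SEQUENTIAL) < 0)
                err(1, "madvise(MADV_SEQUENTIAL)");

            /* ... stream frames out of 'base' ... */

            munmap(base, (size_t)len);
            close(fd);
            return (0);
        }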

    The kernel will read ahead.  Now, it will not read ahead quickly enough
    to avoid blocking, but you may be able to tune the kernel specifically
    to handle your situation.  Take a look at the code in vm/vm_fault.c,
    around line 351.  Also look at the VM_FAULT_READ_AHEAD and
    VM_FAULT_READ_BEHIND defines.

    The key to making this work in a full-streaming application is to
    issue I/O for N pages ahead but not to fault in one page in the center
    of that group.  You then take a page fault when you hit that page, but
    since it is in the buffer cache it doesn't block, and the kernel takes
    the opportunity to issue read-aheads for the next N pages (of which N/2
    are probably already in the buffer cache).  If you make N big enough,
    you are all set.
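
    A rough sketch of a playback loop that relies on this behavior;
    FRAME_SIZE, renderer_buffer(), and render_frame() are hypothetical
    stand-ins for the player's own renderer interface, and 'base'/'len'
    come from the mmap() above:

        #include <stddef.h>
        #include <string.h>

        #define FRAME_SIZE      (320 * 240 * 2)         /* assumed frame size */

        extern unsigned char *renderer_buffer(void);                    /* hypothetical */
        extern void render_frame(const unsigned char *, size_t);        /* hypothetical */

        static void
        play(const unsigned char *base, size_t len)
        {
            size_t off;

            for (off = 0; off + FRAME_SIZE <= len; off += FRAME_SIZE) {
                /*
                 * The copy walks the mapping sequentially.  Each time it
                 * crosses into a page that has not been faulted in yet,
                 * the fault should find the page already in the buffer
                 * cache (from the previous read-ahead), and the kernel
                 * queues read-ahead for the next group of pages, so the
                 * copy rarely has to wait on the disk.
                 */
                unsigned char *dst = renderer_buffer();

                memcpy(dst, base + off, FRAME_SIZE);
                render_frame(dst, FRAME_SIZE);
            }
        }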

    Note, however, that disk I/O bandwidth is much less than memory bandwidth
    and probably much less than video-write bandwidth.  If you have limited
    memory to buffer the data, you *will* block at some point.

:One final note... Does anyone know what effect turning off the bzero on new
:pages would be? Security is not an issue in this system, as it's not
:connected to the net, and all software running on it I wrote. I go through a
:lot of ram, and if I could save some time by not zeroing things, it'd be
:great.
:
:Kevin

    We'd have to be careful, but I see no reason why pages allocated in order
    for I/O to be issued need to be zeroed.  But I was under the impression
    that it already happened this way (pages allocated for I/O of at least
    a page in size aren't zeroed).  In fact, I'm sure of it.

						-Matt

    Matthew Dillon  Engineering, HiWay Technologies, Inc. & BEST Internet 
                    Communications & God knows what else.
    <dillon@backplane.com> (Please include original email in any response)    
