Date:      Thu, 23 Sep 1999 10:20:36 -0700 (PDT)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        "Scott Hess" <scott@avantgo.com>
Cc:        "Kevin Day" <toasty@dragondata.com>, "Daniel C. Sobral" <dcs@newsguy.com>, <hackers@FreeBSD.ORG>
Subject:   Re: Idea: disposable memory
Message-ID:  <199909231720.KAA28657@apollo.backplane.com>
References:  <199909231433.JAA61714@celery.dragondata.com> <199909231654.JAA28326@apollo.backplane.com> <1ea001bf05e6$05d47590$1e80000a@avantgo.com>

    Another idea might be to enhance the swapper.  Using interleaved swap 
    across a number of SCSI disks is a poor man's way of getting serious
    disk bandwidth.

    My Seagates can do around 15MB/sec to the platter.  My test machine's
    swap is spread across three of them, giving me 45MB/sec of swap bandwidth.

    Of course, that's with relatively expensive drives and a U2W/LVD SCSI bus
    (80MB/sec bus).
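
    For what it's worth, FreeBSD interleaves swap across all active swap
    devices automatically, so a three-disk setup like the one above needs
    nothing more than one fstab line per disk.  A sketch (the device names
    here are hypothetical; the 'b' partition is conventionally swap):

```
# /etc/fstab -- hypothetical devices; the kernel interleaves
# swap across every active swap device on its own
/dev/da0b   none   swap   sw   0   0
/dev/da1b   none   swap   sw   0   0
/dev/da2b   none   swap   sw   0   0
```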

    Another possibility is to purchase a single large, cheap DMA/IDE drive.
    IBM has a number of 20+ GB drives that can transfer (I believe) 20MB/sec+
    from the platter.  You get one of those babies and you can use a raw
    partition to hold part of your decompressed video stream.  No memory is
    used at all in this case; you depend entirely on the disk's bandwidth to 
    push the data out and pull it in as needed.  If the disk is dedicated, 
    this should be doable.

    Using a raw partition (e.g. /dev/rda5a) is beneficial if you intend
    to do your own cache management.  Using a block partition (e.g.
    /dev/da5a) is beneficial if you want the system to manage the caching
    for you, but it will result in lower I/O bandwidth due to the extra copy.

    You can also implement a multi-process scheme to stage the data in and out
    of memory.  One process would be responsible for the cache management and
    would do lookaheads and other requests, with the cache implemented as a
    shared-memory segment (see shmat, shmdt, shmctl, shmget).  The other
    process would map the same shared memory segment and use the results.
    You could use SysV semaphores (see semctl, semget, and semop) for locking.

					-Matt







