Date:      Tue, 19 Nov 2019 13:06:36 +0100 (CET)
From:      Wojciech Puchar <wojtek@puchar.net>
To:        freebsd-hackers@freebsd.org
Subject:   geom_ssdcache
Message-ID:  <alpine.BSF.2.20.1911191256330.6166@puchar.net>

Today SSDs are really fast and quite cheap, but hard drives are still many 
times cheaper per gigabyte.

Magnetic hard drives are fine at long sequential reads anyway; they are just 
bad at seeks.

While it's trendy to use ZFS these days, I would stick with UFS anyway.

I try to keep most data on HDDs but use SSDs for small files and 
high-I/O needs.

It works, but requires too much manual and semi-automated work.

It would be better to just use HDDs for storage, some SSD space for cache, 
and the rest of the SSD for temporary storage only.

My idea is to make a GEOM layer that caches one GEOM provider (a magnetic 
disk/partition, or a gmirror/graid5) using another GEOM provider (an SSD 
partition).

I have no experience writing GEOM layer drivers, but I think geom_cache 
would be a fine starting point. At first I would do read/write-through 
caching. Write-back caching would come next - if at all; it doesn't seem 
like a good idea unless you are sure the SSD won't fail.

But my question is really about UFS. In the GEOM layer I would like to know 
whether a read/write operation is an inode/directory/superblock access or a 
regular data access - so I could give the former higher priority. Regular 
data would not be cached at all, or only when the read size is less than a 
defined value.

Is it possible to modify the UFS code to somehow pass a flag/value along 
when issuing a read/write request to the device layer?


